GH Python Script Execution Time Increases On Each Successive Run

I have a script that works on individual planar curves and creates hundreds of offset curves with Rhino.Geometry. This is done in a GHPython component that imports custom Python classes. Results are great.

On one curve the script has an acceptable runtime of approx. 10-12 s depending on the inputs.

However, this has to be applied to 100-200 curves, so I would like to batch-run it for about an hour and be done with it.

Unfortunately the execution time increases from 10 s on the first run to 20 s on the second, and so forth. So I can only process really small batches in an acceptable time frame. If I loop over too many curves, Rhino becomes unresponsive and the risk of crashing increases.

It does not matter whether:

  1. I feed the curves as a list (list access) and loop over them within the Python component, or
  2. I design the component for a single curve and then use implicit looping via “item access”, feeding the curves as a list.

Runtime/behaviour is almost identical.

I looked for bottlenecks or growing arrays in the code, but even the smallest operations take longer and longer on each subsequent run, e.g. 50 ms, then 100 ms, then 150 ms.

The heavy lifting is done in Rhino.Geometry, with offsets of NurbsCurves and PolyCurves, trimming and joining the results, etc.

Is there any way to reset and free Rhino Python resources within the loop to avoid this behaviour?
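For instance, something like an explicit garbage-collection pass between curves is what I have in mind (a hypothetical sketch; I don't know whether `gc.collect()` actually reaches the dotnet side of the bridge):

```python
import gc

def process_batch(curves, worker, collect_every=10):
    """Run worker() on each curve, forcing a garbage-collection pass
    every `collect_every` iterations to release dropped objects early."""
    results = []
    for i, crv in enumerate(curves):
        results.append(worker(crv))
        if (i + 1) % collect_every == 0:
            gc.collect()  # explicit cleanup between mini-batches
    return results
```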

The topic has been brought up here before, but that thread is from 2014 and I am not sure it is still an issue:

BR Conrad

For the record: Win11, RH8 SR10 (most recent), Python 3 scripts used.

I ran another test with a different calculation, once in the Script Editor and once in the GH Python component. I ran a method of my objects in a loop about 600 times and timed the execution of each operation. See the results below.

Clearly the GH Python execution time seems to increase within the loop, making it prohibitive to use for large loops. In the Script Editor, execution time seems to be constant.

Can anybody reproduce this with their own code?

Nope. You’ll probably need to post some code that demonstrates the issue for any meaningful feedback. Also, there appears to be some confusion about which component is showing this behaviour (i.e. GHPython, the new IronPython, or the new CPython component), which a minimal (non-)working example file would also help clarify.

@cpbln Would you mind running this test on 8.11 SRC as well? It would be great to have a chart for comparison.

No change with SRC 8.11. Behaviour is the same.

Unfortunately I am working with IP related geometry and code so I cannot post the problem as is. I am trying to produce a minimal working example.

I created a simple example where the behaviour of running the Python 3 code from the Script Editor really differs from that of the GH Python 3 component.

For two curves (top, bottom), points along the x axis are determined on a linearly spaced grid. This is done by intersecting a world YZ plane with each curve and taking the intersection points.

One iteration takes 40 ms on my machine; that stays constant in the Python script, no matter how many loops I run (e.g. up to 1000).
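The timing loop I use boils down to roughly this pattern (a pure-Python sketch; the actual Rhino curve/plane intersection call is left as a placeholder, since it isn't needed to show the pattern):

```python
from timeit import default_timer as timer

def time_iterations(task, n):
    """Call task() n times and return the duration of each call in seconds."""
    durations = []
    for _ in range(n):
        t0 = timer()
        task()  # e.g. the curve/plane intersections for one grid pass
        durations.append(timer() - t0)
    return durations

# Example: durations = time_iterations(lambda: sum(range(1000)), 100)
```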

In GH Python 3 the speed is the same for a low number of loops (e.g. 10 or 100). But if I increase to 1000 loops or beyond, the runtime increases by a factor of 3 or 4 towards the end. Memory consumption also seems to rise drastically.

I am not looking for an improvement to this specific code, but to understand how to make GH Python iteration runtimes show the same behaviour as running a Py3 script directly.

Curve_Plane_Intersections_GH_Py3_Speed_issue.3dm (81.2 KB)
Rhino_v8_speed_test.gh (4.3 KB)
Rhino_v8_speed_test.py (1.6 KB)

@cpbln So I tested the 3dm and scripts that you shared here in latest Rhino 8.11. Here is what I am seeing:

I ran the script and definition multiple times and pretty much see similar numbers. Memory consumption is also fairly stable (not rising at an alarming rate) but I know this is a simplified example so you are probably seeing this differently.

Replicating the Issue

Not sure what I am missing in this test. It would be nice to run this test on a clean Rhino (no plug-ins) on your machine, just to make sure other factors are excluded.

Python 3 Performance

To answer your question above, Python 3 performance in both GH and Rhino should be the same. There is a little extra work the Grasshopper component does for managing the inputs/outputs and execution context that affects the exec time of the component, but the overhead should be very small.

Note that Python 3 runs inside the dotnet environment, and under the current implementation (a fork of pythonnet), accessing dotnet properties and methods (e.g. rg.Intersect.Intersection.CurvePlane) is much slower than in IronPython. The rest of Python 3 (anything not dealing with dotnet) should run as fast as Python 3 normally runs.
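Given that per-access overhead, one mitigation worth trying (a general Python pattern, not something specific to the pythonnet fork) is hoisting deep attribute chains like `rg.Intersect.Intersection.CurvePlane` out of hot loops by binding them to a local name once. The same pattern, illustrated with a plain-Python stand-in for the dotnet chain:

```python
import math

def roots_slow(values):
    # math.sqrt is looked up through the module on every iteration
    return [math.sqrt(v) for v in values]

def roots_fast(values):
    sqrt = math.sqrt  # resolved once, bound to a fast local name
    return [sqrt(v) for v in values]
```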

Hi Ehsan,

Thanks for looking into it. It seems you are getting the results I am looking for, and good to know it works as intended for you: predictable runtimes that scale with the number of iterations, just like in Python 3 scripts.

However, I have two machines with identical hardware side by side, and both produce a different behaviour towards the higher iteration numbers:

  1. one fast iteration (30-50 ms)
  2. five (sometimes four) slower iterations of ~100 ms

Both are Dell Precision M7530, 6-core machines (i7-8850H CPU @ 2.60 GHz).
This is just with stock Rhino (no special plug-ins, other than pandas, numpy, scipy).

Hi Ehsan,
Strangely enough, this problem resolved itself on 3 different machines within the last 2-3 hours, without any code changes or new installations. I can now run 2000 iterations with more or less constant runtimes per iteration.

Thus I believe it must be related to some Windows 11, company software, or driver install that was pushed company-wide. I noticed that a .NET Framework update for Win11 was installed, as well as some other updates.

Unfortunately I cannot pinpoint the origin of the problem. But thank you for confirming that the problem is not within Rhino & Grasshopper.


Anytime. Makes me happy that I don’t have some bad code slowing things down in Rhino :sweat_smile:

Keep me posted on the progress and new findings. My main focus right now is maturing the new scripting environment in Rhino 8, so that more features can be built on top of it in 9.x and GH2.


Hi
I’m having the same problem with iterations taking longer and longer as the loop progresses. The piece of code below takes in a list of points and calculates their distance from the origin 500 times in a while loop. I’m running the code in a Python 3 component in Grasshopper.

import Rhino.Geometry as rg
from timeit import default_timer as time

MAX_ITERATIONS = 500

i = 0
while i < MAX_ITERATIONS:
    start_time = time()
    i += 1
    for point in points:  # "points" is a list input on the component
        point.DistanceTo(rg.Point3d(0,0,0))
    print(time() - start_time)

Graphing the time output I get this:

I’m guessing the cause of my problem could be similar to cpbln’s, but I have no idea what the offending software could be. I’m running this on my private laptop, which is a fairly new Lenovo Yoga Pro 9i. Is there an easy way to find which programs could be interfering with this dotnet environment? Alternatively, is there some way to “reset” or clean up the dotnet environment from inside the script? Like whatever happens between each time I run the Python 3 component in Grasshopper.

The execution time was constant for me (although about 1 in 20 of them took 10-20 times as long):

import Rhino.Geometry as rg
import Rhino
from timeit import default_timer as time

import scriptcontext as sc
import rhinoscriptsyntax as rs

sc.doc = ghdoc #Rhino.RhinoDoc.ActiveDoc

MAX_ITERATIONS = 500

# points = rs.ObjectsByType(1)

i = 0
while i < MAX_ITERATIONS:
    start_time = time()
    i += 1
    for point in x:  # "x" holds the point object ids from a Geometry param
        geom = sc.doc.Objects.FindGeometry(point)
        geom.Location.DistanceTo(rg.Point3d(0,0,0))
    print(time() - start_time)
0. 0.0226278000000093
1. 0.0004813000000467582
2. 0.0004707000000507833
3. 0.00046900000006644404
4. 0.0004696999999396212
5. 0.00046059999999670254
6. 0.0004608000000416723
7. 0.0004617999999254607
8. 0.0026321000000280037
9. 0.0004709999999477077
10. 0.0006005000000186556
11. 0.00046620000000530126
12. 0.0004636000001028151
13. 0.00046320000001287553
14. 0.0004643999999416337

486. 0.0004629000000022643
487. 0.00046620000000530126
488. 0.0004676999999446707
489. 0.0004652999999734675
490. 0.00046359999998912826
491. 0.000459499999919899
492. 0.00046010000005480833
493. 0.0004589999999780048
494. 0.0004604000000654196
495. 0.00046669999994719547
496. 0.0004634000000578453
497. 0.0004917999999634048
498. 0.004219899999952759
499. 0.0004698000000189495


Hi
Did you use the Python 3 component in Grasshopper? I get a similar result to yours when I run the script from the Rhino Python Editor: constant execution times with spikes roughly every 20th iteration. When running the script from the Python 2 component in Grasshopper I get the same, but with much smaller spikes for some reason. The problem seems to be only with the Python 3 component, where iterations start out slower than with the other two methods and then keep getting slower with each iteration.


This was in a CPython3 component in Grasshopper. Windows 11. SR17 or 18.


I noticed your iteration times are pretty short. I suspect you would have to increase the number of points by a factor of 10 to see the slowdown I’m seeing. Then the spikes come more frequently and with increasing duration. Actually, the length of each iteration stays constant for most of the test for me too, but the spikes get higher and higher.
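To separate that constant baseline from the growing spikes, a quick comparison of medians and maxima over the first and last stretch of iterations works (a plain-Python sketch; the function name and windowing are mine):

```python
from statistics import median

def trend(times, window=50):
    """Compare the first and last `window` entries of a per-iteration timing series.
    Returns (median_ratio, max_ratio): both near 1.0 means stable runtimes;
    a median_ratio near 1.0 with a large max_ratio means growing spikes."""
    head, tail = times[:window], times[-window:]
    return median(tail) / median(head), max(tail) / max(head)
```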

I tried running the same script on my old laptop and got the same slowing performance as on my new laptop.

I had to manually place 15 or 20 points, as well as tweak the code for how I select the points in a Geometry param.

If you supply a complete working example .gh file and a .3dm if needed, I’ll give it a whirl no matter how many points are in it.

Here is a working example .gh file. I set the slider for the number of points to 200, but feel free to crank it up if it finishes very quickly (less than a couple of seconds). Thank you for giving it a go.
Python3Slowdowntest.gh (13.2 KB)

Nice one. I see similar behaviour for 681 points.

0. 0.017192
1. 0.015504
2. 0.017716
3. 0.018031
4. 0.016952
5. 0.018165
6. 0.015927

487. 0.129689
488. 0.124775
489. 0.071053
490. 0.129052
491. 0.087182
492. 0.133688
493. 0.074047
494. 0.134724
495. 0.093917
496. 0.12964
497. 0.071426
498. 0.124859
499. 0.113536

I hadn’t discovered the QuickGraph before - cheers. My code takes a similar time with 670 points in the .3dm file:

0. 0.05736449999994875
1. 0.029464200000006713
2. 0.032461600000033286
3. 0.03264159999991989
4. 0.10517049999998562
5. 0.029946300000005976
6. 0.032949899999948684
7. 0.036111700000105884
8. 0.03233900000009271
9. 0.03666259999999966
10. 0.03342120000002069

484. 0.4381488999999874
485. 0.5273559999999407
486. 0.39547859999993307
487. 0.1829550999999583
488. 0.26826949999997396
489. 0.20282109999993736
490. 0.2688329000000067
491. 0.2678227000000106
492. 0.18570279999994455
493. 0.26804949999996097
494. 0.1880032999999912
495. 0.2684928999999556
496. 0.18573020000008
497. 0.27290229999994153
498. 0.2685250999999198
499. 0.16962230000001455