Clusters are not freeing up RAM?


Hi

It looks like memory is not freed from a Grasshopper cluster.
If I generate some points in a Grasshopper definition and run it 100 times, the RAM used by Rhino is 1.3 GB.
If I put the same components inside a cluster, the RAM reserved by Rhino is 26.7 GB.

I’m looping using the Grasshopper Player like this:

from Rhino import RhinoApp

for i in range(100):
    RhinoApp.RunScript("-GrasshopperPlayer /Users/johanpedersen/Desktop/test.gh",False)

[screenshot: memory usage, not in cluster]

[screenshot: memory usage, in cluster]



I am assuming you are running the Python script inside Rhino, right? Would you mind dropping the Grasshopper file as well so I can test?

Hi @eirannejad

Yes, the loop runs in Rhino.
This is the file:
file_to_run.gh (7.9 KB)

import Rhino
# This file is run from script editor in Rhino
for i in range(50):
    Rhino.RhinoApp.RunScript("-GrasshopperPlayer /Users/johanpedersen/Desktop/file_to_run.gh", False)

@Johan_Lund_Pedersen Is this still an issue, or is it related to this?

Hi @eirannejad

I still think this is an issue. In the code in the previous post, I’m not using a headless doc.
I tried the code below, but it doesn’t work. I’m unsure whether I’m doing it wrong or whether the Grasshopper Player doesn’t work with a headless doc.

Regardless, I understand the Grasshopper Player to be a tool meant to be used while modelling in Rhino, to do some complex geometry manipulation. If RAM is not freed after this operation, I think there is an issue.

import Rhino

for i in range(5):
    # Create a headless doc and run the Grasshopper Player against it,
    # targeting the doc by its runtime serial number.
    doc = Rhino.RhinoDoc.CreateHeadless(None)
    if doc is not None:
        try:
            Rhino.RhinoApp.RunScript(doc.RuntimeSerialNumber, "-GrasshopperPlayer /Users/johanpedersen/Desktop/RhinoForum/clusters/file_to_run.gh", False)
        finally:
            doc.Dispose()  # release the headless doc even if RunScript fails

Hi @eirannejad

Were you able to reproduce this?

To be a bit more precise about the issue, I tried this.
The two .gh files hold the same SquareGrid component, but in one of them it is inside a cluster.

When I run the file where the component is not in a cluster, the memory use is relatively stable. When I run the one where the component is inside a cluster, the memory used grows steadily, and by quite a lot.

file_to_run.gh (3.3 KB)
file_to_run_cluster.gh (4.8 KB)

Version 8 (8.5.24072.13002, 2024-03-12)
Rhino Mac - M1

# r: psutil
import psutil
import Rhino

ite = 100

# Find the Rhino process so its memory use can be sampled.
process = None
for proc in psutil.process_iter():
    if proc.name() == "Rhinoceros":
        process = proc
        break

baseline = process.memory_info().vms / (1024**2)

print("not in cluster")
for i in range(ite):
    Rhino.RhinoApp.RunScript(r'-GrasshopperPlayer /Users/johanpedersen/Desktop/RhinoForum/clusters/file_to_run.gh', False)
    diff = process.memory_info().vms / (1024**2) - baseline
    print(diff)

baseline = process.memory_info().vms / (1024**2)

print("in cluster")
for i in range(ite):
    Rhino.RhinoApp.RunScript(r'-GrasshopperPlayer /Users/johanpedersen/Desktop/RhinoForum/clusters/file_to_run_cluster.gh', False)
    diff = process.memory_info().vms / (1024**2) - baseline
    print(diff)
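As a sanity check on the measurement itself, the same psutil pattern can be reproduced in plain Python outside Rhino. This is only a sketch: it monitors the current process (rather than searching for Rhinoceros) and uses rss instead of vms, with growing bytearray allocations standing in for the memory a leaking cluster document would retain:

```python
# Minimal sketch of the psutil measurement pattern, applied to the
# current process. The retained bytearrays simulate a leak: memory
# that is allocated on every iteration and never released.
import os
import psutil

process = psutil.Process(os.getpid())
baseline = process.memory_info().rss / (1024**2)  # resident set size, MB

retained = []  # references kept alive, so the allocations cannot be freed
diffs = []
for i in range(5):
    retained.append(bytearray(20 * 1024 * 1024))  # allocate ~20 MB
    diffs.append(process.memory_info().rss / (1024**2) - baseline)

print(diffs)  # steadily growing values, like the "in cluster" column
```

If the per-iteration diff climbs like this instead of levelling off, memory from previous runs is still being held somewhere.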

result:

not in cluster
52.4375
53.90625
331.109375
331.375
331.65625
331.671875
331.6875
331.703125
331.796875
329.984375
330.0
329.9375
329.9375
329.90625
329.90625
329.921875
329.9375
329.90625
329.921875
329.90625
329.921875
329.90625
329.90625
329.921875
329.953125
329.90625
329.984375
329.984375
330.015625
328.8125
328.265625
328.28125
328.265625
328.265625
328.28125
328.3125
328.359375
328.265625
328.296875
328.265625
327.375
327.40625
327.4375
327.359375
327.359375
327.359375
327.375
327.359375
327.484375
327.5
327.484375
327.46875
327.453125
327.453125
327.46875
327.453125
327.46875
327.515625
327.53125
327.453125
327.484375
327.453125
327.453125
327.09375
326.703125
326.71875
326.75
326.765625
326.703125
326.703125
326.71875
326.15625
326.171875
326.15625
326.15625
326.1875
326.15625
326.171875
326.15625
326.171875
326.21875
326.234375
326.15625
326.15625
326.1875
326.203125
326.234375
326.15625
326.171875
325.640625
325.609375
325.625
325.609375
325.625
325.609375
325.609375
325.625
325.609375
325.609375
325.609375
in cluster
0.015625
-0.0625
-0.15625
0.25
0.5625
0.59375
284.015625
284.09375
284.09375
539.984375
539.984375
540.015625
540.046875
796.078125
796.0
796.0
796.0
1052.03125
1052.09375
1052.015625
1308.0625
1308.09375
1307.484375
1307.515625
1563.53125
1563.546875
1563.5
1563.515625
1820.0
1820.015625
1820.03125
2076.0
2076.0
2073.78125
2073.78125
2329.8125
2329.84375
2329.796875
2329.796875
2585.828125
2585.8125
2585.828125
2841.828125
2841.84375
2841.875
2841.90625
3152.625
3152.625
3152.34375
3152.34375
3408.359375
3408.921875
3408.90625
3664.9375
3664.921875
3664.9375
3664.921875
3920.9375
3920.9375
3920.953125
3920.9375
4176.96875
4176.953125
4176.984375
4432.96875
4432.984375
4432.96875
4432.96875
4688.984375
4689.015625
4689.046875
4688.984375
5054.140625
5054.140625
5053.59375
5309.65625
5309.6875
5309.703125
5309.765625
5565.78125
5565.796875
5565.875
5565.875
5821.921875
5821.9375
5821.96875
6079.75
6079.765625
6079.375
6079.390625
6335.390625
6335.390625
6335.421875
6335.453125
6591.40625
6591.40625
6591.421875
6855.46875
6855.5
6855.53125

I am also interested in knowing about this, as it could perhaps be one of the reasons I am encountering overall poor performance in a script with a lot of clusters. Sometimes the RAM fills up to 100%, sometimes not, with the same input. Usually when it happens, killing Rhino and re-opening it fixes the issue. I feel like there is something unstable about those clusters; any support there would be appreciated.


I just tested this and can definitely replicate it. There seems to be a problem where GH_Cluster does not dispose its internal document. I have logged it here and will get it fixed:

RH-81896 GH_Cluster that does not dispose its internal document when removed


I am experiencing the same issue on Rhino 7; hopefully this fix can be pushed to the v7 branch as well.


Hi @eirannejad

Thank you for looking into this. Much appreciated!

-johan