It looks like memory is not freed by Grasshopper clusters.
If I generate some points in a Grasshopper definition and run it 100 times, the RAM used by Rhino is 1.3 GB.
If I put the same component inside a cluster, the RAM reserved by Rhino is 26.7 GB.
I’m looping using the Grasshopper Player like this:
from Rhino import RhinoApp
for i in range(100):
    RhinoApp.RunScript("-GrasshopperPlayer /Users/johanpedersen/Desktop/test.gh", False)
Yes, the loop runs in Rhino.
This is the file. file_to_run.gh (7.9 KB)
import Rhino
# Run from the script editor in Rhino
for i in range(50):
    Rhino.RhinoApp.RunScript("-GrasshopperPlayer /Users/johanpedersen/Desktop/file_to_run.gh", False)
I still think this is an issue. In the code in the previous post, I’m not using a headless doc.
I tried the code below, but it doesn’t work. I’m unsure whether I’m doing it wrong or whether the Grasshopper Player simply doesn’t work with a headless doc.
Regardless, I understand the Grasshopper Player to be a tool meant to be used while modelling in Rhino, to do some complex geometry manipulation. If RAM is not freed after that operation, I think there is an issue.
import Rhino
for i in range(5):
    doc = Rhino.RhinoDoc.CreateHeadless(None)
    opened = doc is not None
    if opened:
        doc_id = doc.RuntimeSerialNumber
        Rhino.RhinoApp.RunScript(doc_id, "-GrasshopperPlayer /Users/johanpedersen/Desktop/RhinoForum/clusters/file_to_run.gh", False)
        doc.Dispose()
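One detail worth guarding against in a loop like the one above: if RunScript raises, the call to doc.Dispose() is skipped and the headless doc leaks as well. Wrapping the dispose in try/finally avoids that. Here is a minimal generic sketch of the pattern, where FakeDoc and run_player_on are hypothetical stand-ins, not RhinoCommon API:

```python
class FakeDoc:
    """Hypothetical stand-in for a disposable document; not RhinoCommon."""
    disposed = 0  # counts how many docs were disposed, for demonstration

    def Dispose(self):
        FakeDoc.disposed += 1


def run_player_on(doc, work):
    # try/finally guarantees Dispose runs even if the work raises
    try:
        work(doc)
    finally:
        doc.Dispose()


run_player_on(FakeDoc(), lambda d: None)       # normal run: disposed
try:
    run_player_on(FakeDoc(), lambda d: 1 / 0)  # failing run: still disposed
except ZeroDivisionError:
    pass
```

The same shape applies to the real loop: create the headless doc, run the player inside try, and dispose in finally.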
To be a bit more precise about the issue, I tried this.
The two .gh files hold the same SquareGrid component, but in one of them it is inside a cluster.
When I run the file where the component is not in a cluster, memory use stays relatively stable. When I run the one where the component is inside a cluster, the memory used grows steadily, and by quite a lot.
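This "stable vs. steadily growing" comparison can also be quantified outside Rhino. Here is a generic sketch, using only the stdlib tracemalloc module (the leaky/clean functions are illustrative stand-ins, not the actual Grasshopper test): a step that keeps references alive, like an undisposed internal document, shows growing traced memory across iterations, while an equivalent step that releases its allocations stays roughly flat.

```python
import tracemalloc

_retained = []  # simulates state that is never released (an undisposed internal document)


def leaky_step():
    # each call keeps its allocation alive via the module-level list
    _retained.append([0] * 10_000)


def clean_step():
    # allocates the same amount, but it is freed when the call returns
    _ = [0] * 10_000


def traced_growth(step, iterations=50):
    """Return how much traced memory grew between the first and last iteration."""
    tracemalloc.start()
    step()
    first, _peak = tracemalloc.get_traced_memory()
    for _i in range(iterations - 1):
        step()
    last, _peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return last - first


leaky = traced_growth(leaky_step)
clean = traced_growth(clean_step)
# leaky grows by roughly iterations * list size; clean stays near zero
```

The same idea applies to the Rhino test: sample process memory after each GrasshopperPlayer run and see whether the delta per iteration converges to zero or stays constant.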
I am also interested in this, as it could be one of the reasons I am encountering overall poor performance in a script with a lot of clusters. Sometimes the RAM fills up to 100%, sometimes not, with the same input. When it happens, killing Rhino and re-opening it usually fixes the issue. Something feels unstable with those clusters; any support there would be appreciated.
I just tested this and can definitely replicate it. There seems to be a problem where GH_Cluster does not dispose its internal document. I have logged it here and will get it fixed:
RH-81896 GH_Cluster that does not dispose its internal document when removed