We have big Grasshopper files with very complex definitions inside.
I'd like to use clusters to simplify the definitions, and maybe build my own collection of clusters to reuse across different ShapeDiver files.
I have heard that clusters can impact performance on the ShapeDiver platform. Is that true?
Do you have any tips/recommendations about clusters?
Is there a way to explode all clusters in one click before uploading the GH file to ShapeDiver?
Or is it possible to upload the GH file together with its clusters?
Clusters affect performance when opening the GH file, not when running it. This also happens locally, so it is not ShapeDiver-related.
Regarding a solution, could you split your GH file into smaller files that work in parallel? If so, you can use the new instances feature, which lets you run multiple GH files in ShapeDiver.
This feature is in beta testing, but you can try it out. I am attaching a GH file with a simple explanation of how it works. This GH file is what we call the controller; it is in charge of controlling the other GH files to which you can send data:
Using instances would be my main recommendation if your GH file has thousands of components. However, you can also use the C# script below to explode all clusters in your model automatically. If your file has many clusters, it may take a while to explode all of them:
@olivia3 The script was developed in the new Rhino 8 script component, so you will have to adapt it for the old Rhino 7 script component. I am attaching the raw code:
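In case the attachment is not accessible, the overall structure of such a script looks roughly like this. This is a sketch, not the attached script: `GH_Document`, `GH_Cluster`, and `doc.Objects` are real Grasshopper SDK types and members, but `ExplodeOneCluster` is a hypothetical placeholder, since the SDK does not (to my knowledge) expose a one-line public "explode cluster" call and the real logic has to re-instantiate the cluster's internal objects in the parent document:

```csharp
// Sketch only: find every cluster in a Grasshopper document and
// explode it. ExplodeOneCluster is a hypothetical placeholder for
// the actual explode logic in the attached script.
using System.Linq;
using Grasshopper.Kernel;
using Grasshopper.Kernel.Special;

public static class ClusterExploder
{
    public static void ExplodeAll(GH_Document doc)
    {
        // Snapshot the clusters first, because exploding them
        // modifies the document's object list while we iterate.
        var clusters = doc.Objects.OfType<GH_Cluster>().ToList();

        foreach (var cluster in clusters)
            ExplodeOneCluster(doc, cluster);

        // Note: exploding can surface nested clusters, so in practice
        // you would repeat this pass until no GH_Cluster objects remain.
    }

    static void ExplodeOneCluster(GH_Document doc, GH_Cluster cluster)
    {
        // Placeholder: copy the cluster's internal document objects
        // into `doc`, rewire their inputs/outputs to whatever the
        // cluster's hooks were connected to, then remove the cluster.
    }
}
```

Keep in mind that a script like this rewrites the document in place, so run it on a copy of your GH file before uploading to ShapeDiver.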