I just realised something while working on a large cluster that I never noticed before. If I have all my components in Grasshopper without clusters, it only recalculates whatever is connected downstream of a change. So even in a large definition, if something is somewhat isolated, Grasshopper is clever enough not to recalculate the whole definition.
Now clustering just kind of makes sense for clarity, especially for larger patches.
But if any input into the cluster changes, the whole cluster is recalculated, regardless of which part inside the cluster is actually affected.
How come? Why is the inside of a cluster not treated the same as a normal Grasshopper document? I think I might have to go back to some of our older huge definitions and use that new Unexplode function that was silently added. It's a shame that clusters can't be used purely for organisation then, since now I need to make sure that only tightly connected things go into a cluster to keep performance high. I kind of feel like clusters were never intended to hold large parts of a definition.
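To make the difference concrete, here is a tiny toy model of the two behaviours as I understand them. This is emphatically not Grasshopper's actual implementation — all class and method names are made up — it just sketches "expire downstream only" versus "expire everything inside the cluster on any input change":

```python
# Toy model of downstream-only invalidation vs. whole-cluster expiry.
# NOT Grasshopper's real code; all names here are invented for illustration.

class Node:
    def __init__(self, name):
        self.name = name
        self.downstream = []   # nodes fed by this node's outputs
        self.expired = False   # "needs recompute" flag

    def connect(self, other):
        self.downstream.append(other)

    def expire(self):
        # Flat-document behaviour: expire this node and everything
        # downstream of it, leaving unconnected parts untouched.
        if self.expired:
            return
        self.expired = True
        for n in self.downstream:
            n.expire()

class Cluster(Node):
    def __init__(self, name, internal_nodes):
        super().__init__(name)
        self.internal = internal_nodes

    def expire(self):
        # Observed cluster behaviour: ALL internal nodes are marked
        # for recompute, even ones not downstream of the changed input.
        for n in self.internal:
            n.expired = True
        super().expire()

# Flat definition: a -> b, while c sits isolated on the same canvas.
a, b, c = Node("a"), Node("b"), Node("c")
a.connect(b)
a.expire()
print([n.name for n in (a, b, c) if n.expired])  # ['a', 'b'] -- c is untouched
```

With a `Cluster` built the same way, changing any input marks every internal node expired, which is exactly the behaviour I'd like explained.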
Can you explain why this is the case @DavidRutten?
ps: on a related side-note: it would be great to not just have clusters but to have GH files inside GH files. Maybe that would be the differentiator from clusters. So I could save a large chunk as a separate file, then place that file inside another definition. Its inputs and outputs would work the same as a cluster's. Then you could organise large files better, with the added bonus of being able to work independently on large parts of a definition. Maybe a lot to ask, but I feel that's how it should work. It's how a lot of other visual programming tools do it, and indeed how Rhino itself works - I can place one 3dm file inside another as a linked block or instance. Most scripting languages let you import one file into another. That way clusters could still work as before (recalculate their contents on any input change), while placed files follow the same logic as normal GH documents (only recalculate downstream of a change).