Wish: Only recalculate downstream inside clusters

I just realised something, while working on a large cluster, that I never noticed before. If I have all my components in Grasshopper without clusters, then it only recalculates what is connected downstream of a change. So even in a large definition, if something is somewhat isolated, Grasshopper is clever enough not to recalculate the whole definition.
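To illustrate what I mean, here is a minimal sketch of that downstream-only behaviour (my own toy model in C#, not how Grasshopper actually implements it):

```csharp
// Minimal sketch (my own illustration, not Grasshopper's code) of
// downstream-only expiry: changing a node marks only the nodes reachable
// from it as expired, so isolated parts keep their cached results.
using System;
using System.Collections.Generic;

class Node
{
    public string Name;
    public bool Expired;                        // needs to be recomputed
    public List<Node> Downstream = new List<Node>();
}

class DownstreamExpirySketch
{
    // Expire a node and everything that depends on it, and nothing else.
    static void ExpireDownstream(Node node)
    {
        if (node.Expired) return;               // already marked, stop here
        node.Expired = true;
        foreach (var next in node.Downstream)
            ExpireDownstream(next);
    }

    static void Main()
    {
        var slider   = new Node { Name = "Slider" };
        var circle   = new Node { Name = "Circle" };
        var isolated = new Node { Name = "Isolated group" };
        slider.Downstream.Add(circle);

        ExpireDownstream(slider);               // only Slider and Circle expire
        Console.WriteLine($"{circle.Name} expired: {circle.Expired}");       // True
        Console.WriteLine($"{isolated.Name} expired: {isolated.Expired}");   // False
    }
}
```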

Now clustering just kind of makes sense for clarity, especially for larger patches.

But if any input into a cluster changes, the whole cluster is recalculated, regardless of which part inside the cluster is actually affected.

How come? Why is the inside of a cluster not treated the same as a normal Grasshopper document? I think I might have to go back to some of our older huge definitions and use that new Unexplode function that was silently added. It's a shame that clusters cannot be used for organisation then, since I now have to make sure that only fully interconnected components go into a cluster to keep performance high. I get the feeling clusters were never intended to hold large parts of a definition.

Can you explain why this is the case @DavidRutten?
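My guess at why it happens (purely an assumption on my part, not the actual cluster implementation): from the outside a cluster is a single node, so the host document only knows that "an input of the cluster" expired, and the cluster answers by re-solving its entire internal definition. The sketch below shows that, and what this wish would change:

```csharp
// Purely illustrative sketch (assumed behaviour, not Grasshopper's code):
// today, any expired cluster input re-solves every internal component;
// the wish is to re-solve only the internals downstream of that input.
using System;
using System.Collections.Generic;

class ClusterSketch
{
    // Hypothetical internal components, each listing the cluster inputs it depends on.
    static readonly Dictionary<string, string[]> Internals = new Dictionary<string, string[]>
    {
        { "Offset",   new[] { "Input A" } },
        { "Loft",     new[] { "Input A" } },
        { "Text tag", new[] { "Input B" } },   // unrelated to Input A
    };

    // Current behaviour (as reported): any expired input re-solves everything inside.
    static void SolveWholeCluster(string expiredInput)
    {
        Console.WriteLine($"{expiredInput} expired -> re-solving the whole cluster:");
        foreach (var name in Internals.Keys)
            Console.WriteLine($"  recompute {name}");
    }

    // Wished-for behaviour: only re-solve internals that depend on the expired input.
    static void SolveDownstreamOnly(string expiredInput)
    {
        Console.WriteLine($"{expiredInput} expired -> re-solving only its downstream internals:");
        foreach (var pair in Internals)
            if (Array.IndexOf(pair.Value, expiredInput) >= 0)
                Console.WriteLine($"  recompute {pair.Key}");
    }

    static void Main()
    {
        SolveWholeCluster("Input A");     // Offset, Loft and Text tag all recompute
        SolveDownstreamOnly("Input A");   // only Offset and Loft recompute
    }
}
```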

ps: on a connected side-note: it would be great to not just have clusters but to have GH files inside GH files. Maybe that would be the differentiator from clusters. I could save a large chunk as a separate file, then place that file inside another definition. Inputs and outputs would work the same as they do in a cluster. Then you could organise large files better, with the added bonus of being able to work independently on large parts of a definition. Maybe a lot to ask, but I feel that's how it should work. It's the same way a lot of other visual programming tools do it, or indeed how Rhino works - I can place one 3dm file inside another as a linked block or instance, and most scripting languages let you import one file into another. That way clusters could still work as before (recalculate their contents on any input change), while placed files follow the same logic as normal GH documents (only recalculate downstream).


I'm working on something big and I've encountered the same problem.
A single big cluster takes all the inputs and outputs a brep (a cam).
Then I have some toggles to flip/reverse its orientation or something similar: some fast, simple operations before the final output.
And… it recalculates the whole cluster!

You may want to try V7 WIP. There is a Cluster performance boost as of April 28th: Cluster runtime very slow


Yes, it works as expected on the Rh7 WIP… but I have to deliver to my customers on Rhino 6 :confused:

:man_shrugging:t4:

If it is make-or-break for the project, anyone who can run Rhino 6 can run Rhino 7 today.

I know, but what about the day Rhino 7 actually goes on sale?
I can't work in the Rhino WIP; I am not able to track all the differences and build a script that also works reliably on Rhino 6. The day the WIP is closed, my script might only run on 6, and my customer could suddenly find it not working.
I can't foresee every possibility.
I can't possibly advise him to use it on the WIP; it would backfire on me sooner or later.

Hi, you can try this!
SPEED_CLUSTER for V7, V6 and V5 only recalculates downstream inside clusters!
SPEED_CLUSTER.gha (59 KB)