Same issue here. I am working with user object attributes and C# components inside a cluster. It doesn't update after recalculating GH (pressing F5); I have to open the cluster to get it to update.
Same issue here.
I am seeing the same thing. I’ve found that if I disable and then re-enable the cluster it recomputes.
This seems to happen only when I change an upstream Boolean Toggle from a script. If I toggle the Boolean manually, every cluster downstream from that recalculates correctly.
(My script does not do any expiring or recomputing.)
I am also getting the same error: unreliable cluster updating that even disabling/re-enabling the cluster does not fix.
Lost a good number of hours on this - is there any acknowledgement of this error? Plan forward?
Same thing. Trying something with MetaHopper and Hops did not work out either…
Hopefully Rutten/McNeel will take up this problem
I think this is a “feature” rather than a bug. It used to be that clusters would always re-execute all of their internal contents on every execution, which would often result in unnecessary recalculation. Now (if I am not mistaken) it traces through and only executes “expired” components whose direct upstream inputs have changed. If your cluster contains components that do not have an input wired from outside the cluster, they will not recompute when the cluster’s inputs change.
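To make that tracing behaviour concrete, here is a minimal sketch in plain Python (this is NOT the Grasshopper SDK; all names here are hypothetical) of why a component that reads external state without a wired input keeps returning stale data under expiry tracing:

```python
# A toy dataflow solver: only nodes whose wired upstream inputs changed
# get recomputed; everything else returns its cached value.

class Node:
    def __init__(self, name, compute, inputs=()):
        self.name = name
        self.compute = compute      # function of the input values
        self.inputs = list(inputs)  # wired upstream nodes
        self.cache = None
        self.expired = True         # everything is stale on the first run

    def value(self):
        if self.expired:
            self.cache = self.compute(*[n.value() for n in self.inputs])
            self.expired = False
        return self.cache

def solve(nodes, changed):
    # Expire the changed nodes plus everything wired downstream of them,
    # then resolve. Nodes outside that set keep their cached values.
    dirty = set(changed)
    grew = True
    while grew:
        grew = False
        for n in nodes:
            if n not in dirty and any(i in dirty for i in n.inputs):
                dirty.add(n)
                grew = True
    for n in dirty:
        n.expired = True
    return {n.name: n.value() for n in nodes}

# External state that one node reads WITHOUT a wire -- think "referenced
# Rhino geometry" or "a file on disk":
doc = {"count": 1}

slider = Node("slider", lambda: 10)
reader = Node("reader", lambda: doc["count"])            # no wired inputs!
adder  = Node("adder",  lambda a, b: a + b, [slider, reader])
nodes  = [slider, reader, adder]

print(solve(nodes, changed=[]))        # first run computes everything: adder = 11

doc["count"] = 5                       # the external state changes...
print(solve(nodes, changed=[]))        # ...no wire changed, so adder is STILL 11

print(solve(nodes, changed=[reader]))  # force-expiring the reader: adder = 15
```

The `reader` node is the analogue of a component inside a cluster that references the Rhino document: since nothing wired upstream of it ever changes, expiry tracing never marks it dirty, and it keeps serving its cached (stale) result until something expires it explicitly.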
You can see where @DavidRutten describes this change here: Cluster runtime very slow - #7 by DavidRutten, with more details here: https://mcneel.myjetbrains.com/youtrack/issue/RH-44500
I remember that update too, but now I often have small clusters that don’t update correctly. I made a small one that replaces nulls in a data tree, but it often doesn’t update correctly, defeating the whole purpose of the cluster. I wonder if a change to null is not considered an update, or something. Now that we have Hops, I would rather clusters work the old (reliable) way; for anything large enough to be worth clustering, Hops makes more sense in my opinion, though I have not tried it very much.
Because clusters were slow in the past and are now somewhat unreliable, I have actually moved away from using them and spend a bit more time organising patches so they stay readable even when very large. I have to say that is a much better way of working for me, because it is easier to see the connections between the different logical parts.
Ok, I understand all that, but if a cluster refers to objects in the Rhino document, or even to the camera for example, and those are modified, and on top of that I force recalculation with F5, I expect the clusters to recalculate.
I just started moving some R5 Grasshopper flows to R7, and I am noticing the same as above: certain clusters do not process their newly changed inputs and still output the old data, even after a dedicated recompute. The only workaround right now is to open the cluster and close it again.
What is the current state of this problem? Is there a better solution than manually opening/closing all the clusters every time?
This is quite a big dealbreaker!
I also ran into this problem today.
I have to manually recompute every time I make a change in the model. Very inconvenient.
Eight months on and no solution in sight? This is a CRITICAL BUG that deserves the very highest priority. WTF?
Do you have a sample that you can share that exhibits this bug? It sounds like it is related to referenced objects changing in the Rhino document.
What about references to Data Input (geometry and numeric values) that are changed by another GH file? The fact that it’s intermittent makes it all the more difficult to deal with, making it very difficult to create an example that demonstrates the bug. But eight months of reporting what we used to call a “blocker” bug like this is evidence that the serious problem persists, causing havoc, grief and wasted time. Just giving up and not using clusters at all is not an option!
I’m only guessing at the cause of the bug. Once we have a sample that exhibits the problem we can evaluate what the true cause is.
So the misery may continue for years because McNeel QA can’t be bothered to take seriously the eight months of reports (so far) and investigate the issue.
I believe these reports and saw the issue myself yesterday, but can’t reproduce it at will. I have half a dozen separate GH models all sharing Data Input and Data Output components, far too complex to share, and when this bug strikes the GH definition is just broken for no good reason. Please stop the pain!
P.S. Apparently this bug was introduced in an effort to increase performance by minimizing recalculation of clusters. I assure you that the damage done by this bug is far worse than the issue it’s trying to solve. So can the so-called “improvement” be disabled or optional?
Are these valid examples of problems with the clusters?
clusterhack.gh (8.8 KB)
Thanks, I’ll see if I can repeat this on my computer
Any updates on this?
cluster_not_working.gh (12.3 KB)
Here is another example file.
It appears the cluster updates when I actually change Grasshopper sliders/panels/numbers, but when I update a file (that is read to get some input) and press F5, the cluster is not updated.
This is really weird behaviour; pressing F5, which is labelled ‘Recalculate’, should just recalculate the whole document. Please fix this - or make the performance gain optional…
Please fix this cluster problem as soon as possible. It’s impossible to work with them at the moment.
Impossible is a strong word. Something is amiss but so far the conditions where it fails are elusive. I’m guessing it fails to detect external changes in certain cases. For me, it might be Data Input. Do you have something “unusual” feeding the cluster?