I've seen this raised on other forums, sadly to no avail, so I thought I'd try my luck here.
When creating a definition, I have noticed that its runtime differs significantly if you cluster it. I've investigated this for a few days now and have come to the following conclusions (as useless as they may be!):
1. A definition becomes significantly slower if you cluster it.
2. The runtime of the cluster is not reflected by the runtime of the components within the cluster itself. Let me explain: I created a definition and clustered it, and as expected, the cluster was very slow. As it stands, the cluster takes 4.7 seconds to compute. However, if I enter the cluster and evaluate the runtime of each component inside it (using the 'Bottleneck Navigator' from the MetaHopper add-on, which lists all the components on the canvas by computation time), the cumulative runtime of all the components within the cluster is only 40 ms! This differs wildly from the cluster's runtime of 4.7 seconds.
3. Finally, let's say you create a definition and cluster it, and that this cluster draws 1000 points and then connects every four points with a polyline, giving you 250 curves. Now assume one of the inputs to the cluster is a slider that selects one of these curves, so that every time you move the slider, the cluster outputs a different curve. Essentially, you have a List Item component inside the cluster that picks a curve from your list of 250 curves and outputs it. If you have ever been in my position, then you already know the issue I'm about to raise: when you move the slider feeding the cluster, rather than simply selecting the item you want, the cluster recomputes all of its internal components and then outputs the selected curve. In short, the cluster recomputes itself whenever you make ANY change to any of its inputs. So if the cluster took 4.7 seconds to compute, it takes another 4.7 seconds after any change to its inputs.
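To make point 3 concrete, here is a rough sketch in plain Python (not actual Grasshopper code; the function names and dummy coordinates are just for illustration) of the behaviour I would *expect*: the expensive curve-building runs once, and the slider input acts as a cheap List Item lookup instead of triggering a full recompute.

```python
def build_curves():
    # Stand-in for the expensive part of the cluster:
    # 1000 points, grouped into 250 polylines of 4 points each.
    points = [(i, i % 7, 0) for i in range(1000)]  # dummy coordinates
    return [points[i:i + 4] for i in range(0, 1000, 4)]

_curve_cache = None  # computed once, reused on every slider change

def select_curve(index):
    # The slider input should behave like this: build the 250 curves
    # once, then treat each new index as a simple lookup.
    global _curve_cache
    if _curve_cache is None:
        _curve_cache = build_curves()
    return _curve_cache[index]
```

What the cluster actually does is the equivalent of calling `build_curves()` again on every slider change before the lookup, which is why the full 4.7 seconds comes back each time.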
If you have experienced any of the above and have any solutions or insights into how to resolve this, please let me know! Any help would be greatly appreciated.
I've run a few more tests and I think I've narrowed the problem down to issues 1 and 2 above. In the cluster I originally created, I had two inputs, each with its own data. If I internalise the data INSIDE the cluster (so the data is now stored in the cluster itself and I'm no longer feeding it data from outside) and delete the input, the cluster's runtime drops from 4.7 seconds to 50 ms, which is much closer to the actual runtime of the components inside it. I don't think the content of the input itself matters much: I internalised each input separately, and regardless of which one I internalised, the runtime dropped drastically.
I'm guessing the issue lies in the transfer of data from the canvas into the cluster; perhaps the 'Cluster Input' parameter is what's causing the trouble?
PS: I'm running these tests on Rhino 5. I'm still waiting for my Rhino 6 licence to be emailed to me; I'll run the same tests in Rhino 6 and report my findings.