Why does each Grasshopper recompute take longer than the last?

I have a huge GH1 definition with many elements and clusters within clusters.

At document load it computes in 20 seconds or so, but each recompute takes longer and longer, quickly surpassing 200 seconds.

Are there any steps short of restarting Rhino to get back to the quicker compute times?

I have found great resources on optimizing definitions, but nothing that addresses this phenomenon of an unchanged definition taking longer with each compute cycle.

Following a personal message, I'm updating this post to clarify that there are no third-party plugins or custom scripts involved in this performance issue.

The GH1 definition generates custom electric guitar & bass models. It is huge, with well over 10,000 components in total, and is organized into clusters based on the physical parts of an electric guitar: body, neck, fretboard, etc.

I am aware of some good optimization strategies to reduce compute time and will start applying them as time allows, but none of those strategies explain why a definition would take longer and longer to recompute.

I am very curious to know if anyone else has had this performance degradation experience with Grasshopper and what might be the cause.

Have you looked at the memory usage at start, when it is faster, vs later, when it is slower? My computer is prone to filling its memory because Edge, Rhino, and other apps take up more and more of it over time, which eventually slows everything down. Use Task Manager (or Activity Monitor on Mac) to monitor your memory and CPU usage. Memory leaks from your definition would exacerbate this problem.
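
If you want a reading from inside the document itself, a small scripting component can report the Rhino process's memory on every solve. Here is a minimal GhPython sketch using the standard .NET Process API (the output name `report` is my assumption; wire any input to it so it re-runs with the rest of the definition):

```python
# Minimal GhPython sketch: report the Rhino process's memory use on each solve.
# Assumes a GhPython component with one output named "report".
from System.Diagnostics import Process

proc = Process.GetCurrentProcess()
working_mb = proc.WorkingSet64 / (1024.0 * 1024.0)          # physical memory in use
private_mb = proc.PrivateMemorySize64 / (1024.0 * 1024.0)   # committed private bytes

report = "Working set: {:.0f} MB, private: {:.0f} MB".format(working_mb, private_mb)
```

If the reported numbers climb steadily across recomputes of an otherwise unchanged definition, that points at a leak rather than at the definition's raw complexity.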


To narrow down the problem, you might want to use the Data Output / Data Input components and separate the complex definition into smaller parts.

Just for debugging, you can split the definition by adding parameters and internalising the data up to a certain step.
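
Before splitting anything, it may also help to ask the document which objects were slowest on the last solve, since Grasshopper records a per-object timing. A hedged GhPython sketch (the output name `slowest` is my assumption; `ProcessorTime` is the timing property the SDK exposes on active objects):

```python
# Minimal GhPython sketch: list the ten slowest objects from the last solve.
# Assumes a GhPython component with one output named "slowest".
import Grasshopper.Kernel as ghk

doc = ghenv.Component.OnPingDocument()  # the document this component lives in
timed = []
for obj in doc.Objects:
    # Only active objects (components and parameters) carry solve timings.
    if isinstance(obj, ghk.IGH_ActiveObject):
        ms = obj.ProcessorTime.TotalMilliseconds
        timed.append((ms, obj.NickName))

timed.sort(reverse=True)
slowest = ["{:.0f} ms  {}".format(ms, name) for ms, name in timed[:10]]
```

If one cluster dominates and its time grows with each recompute, that is the part to extract and test in isolation. The built-in Profiler widget (under Display > Canvas Widgets) shows similar per-component timings directly on the canvas.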