Recurring problem: Grasshopper Out Of Memory


#1

After working for anywhere from 15 minutes to an hour on my large Grasshopper definition in Rhino 6, I always get a Grasshopper out-of-memory warning, yet checking Task Manager shows that Rhino is not using very much RAM at all.

I have 20 GB of RAM in this machine, so I’m wondering if memory usage is spiking but garbage collection is reclaiming it before I can see anything in Task Manager?

The definition currently has 5100 components and the only third party components being used are about 500 instances of Telepathy Senders and Receivers.

If I ignore the warning, I can usually continue working for quite a while, but I thought I should post about this issue because something isn’t right with GH.


(David Rutten) #2

The keyword there is large. GDI+ doesn’t like drawing certain things very far away from the origin and it will throw those memory errors when it hiccups. The problem is not that you’re out of memory in toto, but rather that the amount of memory specifically reserved for certain drawing operations is exceeded.

This is a bug deep down in GDI+ and all I can do at this point is catch it and stop all of Rhino from crashing. All you can do is try and shrink your files so they do not contain components and wires far away from (0,0). Perhaps a good time to investigate the new Data Input and Data Output components?
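To illustrate what "catching it" means here, a rough hypothetical sketch in plain C# / System.Drawing (not Grasshopper's actual canvas code): GDI+ reports several of its internal drawing failures as OutOfMemoryException, so the draw call is wrapped and the failed repaint is simply skipped instead of bringing down the whole application. The coordinates below are made up to stand in for a wire sitting very far from (0,0).

```csharp
using System;
using System.Drawing;
using System.Drawing.Drawing2D;

class CanvasRepaintSketch
{
    static void Main()
    {
        using (var bmp = new Bitmap(800, 600))
        using (var g = Graphics.FromImage(bmp))
        using (var pen = new Pen(Color.Black, 3f) { DashStyle = DashStyle.Dash })
        {
            try
            {
                // A wire whose endpoints sit hundreds of thousands of units
                // from (0,0), the sort of geometry that can make GDI+
                // report a spurious "out of memory" error.
                g.DrawLine(pen, new PointF(650000f, 480000f), new PointF(651200f, 480300f));
            }
            catch (OutOfMemoryException)
            {
                // Not a real memory shortage: GDI+ funnels several internal
                // failures through this exception type. Skip the repaint and
                // carry on rather than crashing the host application.
            }
        }
    }
}
```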

Sorry for not being much help.


#3

Ah ok that’s actually quite helpful to know.

I’m in the final stages of a huge code cleanup for this definition, applying a namespace practice to all components and their inputs/outputs and using Telepathy for all situations where an output is connected to more than one input.

In this process I’ve been spreading out the components on the canvas, then grouping them carefully and organizing them back together. So I can see why I’ve been encountering this issue more frequently.

I will have a much more neatly organized canvas at the end of this process and I’ll see if the memory glitch still happens then.


(Miguel Villegas Ballesta) #4

Another GOOD reason for keeping our definitions TIDY.
The other is that the only good looking spaghetti messes are the ones served on a dish with sauce…


#5

I’ve also encountered this problem in the past when dealing with larger definitions, but have always used View > reset canvas to work around it…

Does this mean that you should center your definitions around the starting area of the canvas? I’ve always looked at it like a frame where the starting area represents the top left corner…


(David Rutten) #6

The canvas is designed to look as though you’re only supposed to use the lower-right quadrant, and it’s usually fine to do so; you have to get pretty far away before GDI chokes. But once it starts choking, moving everything back on top of (0,0) may help. I do not know.


(Michael Pryor) #7

5100 components… :grimacing:

That seems really excessive for one definition. Might be time to start breaking it up into logical processes, each as its own GH sub-definition, or making each process its own cluster. Is it really necessary to use all 5100 in one shot? How good are you with lists and optimization? Maybe you are copy/pasting too many things that could be better handled with data trees? Also, look into the GH 1 Data Input and Data Output components so you can bridge between multiple sub-definitions.


(Tom) #8

5100 components?


#9

Thanks very much for your suggestions, Michael; I appreciate your thoughtful input.

The definition is pretty well optimized already. It’s just an extremely detailed parametric model which generates electric guitars & basses according to the specs I input.

Most of the Grasshopper code is relatively simple, but the amount of interdependence between the elements of the instrument is very, very high, so it is really nice to be able to see all of the code on one canvas.

I will definitely check out the new data input & output components when I have a chance.


(Michael Pryor) #10

Well, if there is indeed a memory problem directly related to the number of components on the canvas, but you still like having the whole definition together, maybe start grouping parts of the definition into clusters. Say each cluster has 100 components in it; then you will have only 51 components (clusters) on the canvas.