Memory allocation error when animating slider

I wasn’t sure this error still exists, but I animated a relatively complex GH file overnight and got that memory allocation error. Apparently Rhino just fills up the RAM with every animated frame without purging it.

Has anyone ever solved this?

Actually, I take half of that back. It seems to fill up memory rather quickly and then just stay at whatever the maximum allocation is. The crash seems to be more random.

Although you are probably not directly programming, you are still responsible for checking that you are not creating too many object instances. If you create a lot of geometry, you are obviously allocating memory indirectly. Every curve and every point is an object that lives for a certain lifetime.
Since Grasshopper is memory-managed software, it may not deallocate fast enough, and this may break your system. But the question remains quite vague, because such errors are hard to reproduce. Without decent debugging it’s impossible to tell what the issue is, so there is probably no other solution to your problem than reducing the amount of geometry.
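To illustrate what I mean by “may not deallocate fast enough”, here is a small, purely illustrative C# snippet (nothing to do with your definition): dropping the last reference to managed data does not free the memory immediately; it only comes back after a garbage collection, which runs whenever the runtime decides to.

```csharp
// Minimal sketch: in a managed runtime, memory is not freed the moment
// references are dropped, only after a garbage collection runs.
using System;

class GcDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(false);

        // Allocate roughly 100 MB of short-lived arrays and drop the references.
        for (int i = 0; i < 100; i++)
        {
            byte[] chunk = new byte[1024 * 1024];
            chunk[0] = 1; // touch the array so the allocation is not optimised away
        }

        long afterDrop = GC.GetTotalMemory(false); // may still count the dead arrays
        long afterGc = GC.GetTotalMemory(true);    // forces a full collection first

        Console.WriteLine($"before: {before}, after drop: {afterDrop}, after GC: {afterGc}");
    }
}
```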

I understand that I’m creating a lot of instances, and these problems only occur if I run an animated slider with many hundreds of frames and many hundreds of animated objects, and on top of that with the rendered view on.

I’ve had some crashes in the last few days where I could submit an error message to McNeel, so maybe someone will pick up on it? In one I even mentioned this forum thread :slight_smile:

For now, I just restart the animation and stitch the frames together later. Not the biggest inconvenience, but it keeps me from preparing an animation with a few thousand frames to run over the weekend, unfortunately…

UPDATE:

I’ve recently upgraded to 48 GB of RAM and wanted to see if that solves the problem. It doesn’t.

It’s really not about what is in the GH definition, but about the size of it and the amount of geometry being visualised in each step of the slider animation.

Even with 48 GB the symptoms are the same: the slider animation produces images fast and efficiently but fills up the available memory. After a few hundred frames the memory is full, and things start slowing down drastically. I’m not sure how Rhino handles this situation, but it keeps going like this for quite a while, and then at some point it crashes. Sometimes Rhino just disappears, sometimes there is a Windows error, sometimes I get the Rhino crash window. Pretty random.

SLIDER ANIMATION has always been odd. The fact that it’s almost impossible to cancel because Rhino is SO busy recalculating the entire definition over and over again is understandable but still weird. In my case it also becomes evident that Grasshopper is not purging the used memory but instead fills it up until it crashes. I wonder if this is something that could be updated at some point.

Are you using any scripts within your definition? If so, can you share them?

Unless you are modelling the Moon, that’s a lot of memory. Without a “typical” test case (and GPU info) it’s impossible to tell what’s wrong (or what a Plan B may be).

The operating system and individual applications reserve chunks of the generally available RAM for specific things. There’s a couple of bytes here reserved for call stacks, a few more there for GDI handles, and a few over there for the current date (remember Y2K?). Exactly how many bytes will depend on the OS version, the hardware specifics, and even user settings.

When you’re getting out-of-memory errors, it in all likelihood means that one of these small pools of bytes has run out. It has very little to do with the actual amount of RAM. It could be that as well: you could be creating a huge amount of data which is never allowed to be cleared up, or maybe there’s a memory-leak bug which means the application slowly runs out of its allowed total pool of RAM, but in my experience it’s usually the former, i.e. one of the small pools running dry.

So this indicates that the memory isn’t being released after it is no longer needed. That definitely sounds like a bug. However, “things slowing down” is symptomatic of paging, which is what happens when the RAM runs out and Windows starts using the hard drive as overflow backup. Drives (even SSDs) are much, much slower than RAM, hence the impact on speed.

An out-of-memory error, however, can happen for completely unrelated reasons. In this case, my bet would have been on the GDI handle pool. If the images aren’t getting cleared up, they will not only start gumming up the working memory but also that small part set aside for keeping track of graphics objects. However, I’ve just tested this by running a 5000-frame slider animation, and the memory use during this time was pretty constant between 8 GB and 8.1 GB. I didn’t experience a slow-down, or a crash, or unchecked memory growth.
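If you want to watch both of those numbers yourself while an animation runs, a rough sketch along these lines should do it (standard .NET plus the Win32 GetGuiResources call; the sampling interval and duration are arbitrary):

```csharp
// Rough sketch: periodically sample the working set and the GDI object count
// of the current process while a slider animation is running.
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;

class MemoryWatch
{
    // Win32 call reporting GUI resource usage; uiFlags 0 = GDI objects, 1 = USER objects.
    [DllImport("user32.dll")]
    static extern uint GetGuiResources(IntPtr hProcess, uint uiFlags);

    static void Main()
    {
        Process proc = Process.GetCurrentProcess();
        for (int i = 0; i < 60; i++)           // one sample per second for a minute
        {
            proc.Refresh();                    // update the cached performance counters
            long workingSetMb = proc.WorkingSet64 / (1024 * 1024);
            uint gdiObjects = GetGuiResources(proc.Handle, 0);
            Console.WriteLine($"working set: {workingSetMb} MB, GDI objects: {gdiObjects}");
            Thread.Sleep(1000);
        }
    }
}
```

Run from a C# script component inside Rhino, Process.GetCurrentProcess() is the Rhino process itself; as a standalone console app you would look up the Rhino process with Process.GetProcessesByName instead.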

So my next best guess is that data which is generated inside components for every animation frame is failing to get cleared up. You’re probably using a component in there with a bug, which is why I’d need the file.
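To make that concrete, here is a hypothetical example of the kind of bug I mean (not taken from any real plugin): a script component that keeps its results in a static or otherwise persistent field and only ever appends to it will grow by one batch of geometry per animation frame, because that field survives every new solution.

```csharp
// Hypothetical C# script-component code (RhinoCommon types), illustrating the leak:
// a cache that is only ever appended to grows by one batch of points per frame.
using System.Collections.Generic;
using Rhino.Geometry;

public static class FrameCache
{
    // Survives between solutions if kept in a static/persistent field.
    static readonly List<Point3d> cache = new List<Point3d>();

    public static IReadOnlyList<Point3d> LeakyUpdate(int frame, int count)
    {
        // BUG: previous frames are never removed, so memory grows with every frame.
        for (int i = 0; i < count; i++)
            cache.Add(new Point3d(i, frame, 0));
        return cache;
    }

    public static IReadOnlyList<Point3d> FixedUpdate(int frame, int count)
    {
        cache.Clear();                        // drop the previous frame's data first
        for (int i = 0; i < count; i++)
            cache.Add(new Point3d(i, frame, 0));
        return cache;
    }
}
```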

Thank you for your answers Tom and Peter, and thanks for diagnosing David.

I have to add that the last time I was running an animation, it did not actually seem to slow down. For quite a while memory use went up until there was only about 500 MB left, then that went back to a few GB free, then back to just a few hundred MB, until at some point it crashed.

It sounds like your analysis is correct, David. And to also answer Tom’s question, I am using custom scripts and quite a few plugins (such as Human for custom textures, Human UI, Elefront, and ICD’s Virtual Robot). It’s possible that one or more of them does not clear the data that is generated. I cannot share the definition, but I can try and see if the animation runs better by selectively deactivating those components.

What’s also interesting is that once the animation is finished the memory still isn’t cleared up. I haven’t tested if that changes once I manually update the definition afterwards because I usually close Rhino as quickly as possible, restart, format C:, wash the computer, and go outside for a walk.


I’ve seen really bad scripts regarding robots, written by people with only limited knowledge of what they are doing. A common source of memory errors is instantiating objects which implement the IDisposable interface but are never disposed after use, or accessing unmanaged code with similar failures… Can you eliminate these and test in a similar fashion again?
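As a sketch of the pattern I mean (hypothetical code, RhinoCommon types, tolerance value made up for illustration): temporary geometry that implements IDisposable should be disposed as soon as it is no longer needed, otherwise its unmanaged part lingers until the finalizer eventually runs, which is far too late when the code is called once per animation frame.

```csharp
// Sketch of the disposal pattern in a C# script component (RhinoCommon types).
using Rhino.Geometry;

public static class DisposalExample
{
    public static double LeakyArea(Curve boundary)
    {
        // BUG: the temporary Breps are never disposed; their unmanaged memory
        // lingers until the finalizer runs, frame after frame.
        Brep[] breps = Brep.CreatePlanarBreps(boundary, 0.001);
        return (breps != null && breps.Length > 0) ? breps[0].GetArea() : 0.0;
    }

    public static double TidyArea(Curve boundary)
    {
        Brep[] breps = Brep.CreatePlanarBreps(boundary, 0.001);
        if (breps == null || breps.Length == 0) return 0.0;
        double area = breps[0].GetArea();
        foreach (Brep b in breps)
            b.Dispose();                      // release the unmanaged geometry right away
        return area;
    }
}
```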

BTW: Old but good:

https://blogs.msdn.microsoft.com/oldnewthing/20100809-00/?p=13203


Thank you again for the help. A few more comments as I was trying to find the culprit here:

  • The robot plugin is from our ICD friend and colleague Long Nguyen, and it’s a well-written plugin. It is also not the cause of the problem.
  • The custom preview components are also not the cause.
  • Elefront, Human and Human UI were deactivated and it still crashed.

The only things left were some small Python components that ran through my tree structures and did minor operations. I’m not sure if those could have been the problem. But when I simply made a LOT of boxes with a standard preview / Rhino texture in rendered view, it did indeed not crash or build up a lot of memory usage. So the culprit might actually be either those Python components or simply the size of the definition.

I don’t have enough time to look into it right now, but I’m sure I’ll get back to it at some point.

That is odd reasoning…
I’m not saying anyone is a bad programmer (we all are :slight_smile: ), and Long Nguyen has made great tutorials. But I myself would never exclude my own code from being the problem, even though I work on software projects with a high demand for functional safety. And if nobody else is encountering such issues, then it’s very likely a problem in my own code base, isn’t it?!
Learning never ends, and to be honest with you guys: with all due respect to academic research, you really are at the beginning of a professional career. You should ask someone with two decades of experience in software development whether your plugin is well written :stuck_out_tongue:

Sorry, Tom, I felt the need to defend my colleague here. Of course, no one’s code is perfect.

In any case, what I wanted to say is that I checked all the plugins and none of them seems to be the issue - so it is indeed most likely my own fault.

I am experiencing the exact same issue. I just wanted to record the rotation of an object in Grasshopper, but apparently GH fills up a little more memory with every frame until Rhino crashes and closes if the animation hasn’t finished by then. Am I doing something wrong?

I have the same issue after a few changes of a slider connected to a Move component. I found that the Custom Preview component is blowing up memory. When I disable this component, everything is OK.

Is one of your viewports in Rendered, Raytraced, or Arctic mode?

Yes, I use it for an animation. I animate the slider and capture the output from a rendered view.

I got the same issue, too.
My workaround is to use Custom material and render it in wireframe view, I don’t know why but somehow the custom material can be rendered in wireframe view.