Rhino Out of Memory Crash running Wallacei simulation

Hi,

When running Wallacei, my Rhino always crashes with a "memory allocation failed" error after some time, even though it's a relatively small sample size of 200 iterations. However, my friend was able to run the same file for 1000 iterations with no issue. From what I observed in the image below, the committed memory appears to be full right before this happens. Is this the issue, and what can I do to solve it? Thanks.

Can you send your PC specs? What versions of Rhino and Wallacei are you using?

Hello Ngjyao, I also didn’t see any issue just from looking at the UI screenshot. Have you tried the example we ship with the plugin? If that one works fine, it might be a Grasshopper geometry operation causing the memory leak; we need to reproduce the issue you encountered in order to get it solved.

Can you share the GH definition file with us for further investigation? And what other plugins are you using together with Wallacei in this definition?

JY_3Tower_Forum.gh (257.1 KB) specs

Thanks for the responses. PC specs are in the image, and I’m using Rhino 7 and, I think, version 2.6 of Wallacei, together with Ladybug and Chromodoris. The example works fine, but as mentioned, I don’t know why the current file can run on another computer but not mine, even though I think my specs are sufficient.

Hi, just wondering if there is any update on this?

We are still debugging with the example you shared. I have not been able to reproduce the crash using Rhino 6 so far. The RAM utilisation for Rhino is also stable at around 1–1.4 GB. I will let you know if we manage to locate the issue or make any progress.

Could you also have a look at this post in the forum? Someone seems to have hit the same error as you. Check whether you have installed any Rhino plugins that could cause a memory leak (see the Rhino plugin manager):

A memory allocation failed and rhino will close

Hi, I saw this post before, and I have only 2 plugins: V-Ray and 1 other, which I disabled. After your post, I disabled V-Ray as well and tried running Wallacei, but it still failed. However, it seems like a memory leak could be the cause, as the committed memory shown in the task manager keeps rising. Is there any other way to address this?

Here are some possible solutions, you can try them and see if these can address the issue:

  • Disable Ladybug and run the simulation without it. If that works, it means you need to reinstall Ladybug; it may not have been installed correctly.
  • Rhino 7 is quite new, and there may still be some issues out there. Update to the latest Rhino 7 service release and try again. If it is still the same, you might need to consider rolling back to Rhino 6 for now, unless you specifically need the SubD features.
  • Test whether there is any issue with your hardware by running a RAM test.

Just want to update: there wasn’t a problem with my hardware or my Ladybug installation. It seems the issue was the Ladybug simulation grid size. It was set at 2, and if I set it to a larger value (20), more iterations can be run (500). Not sure if this can be fixed, as it seems to be a data storage issue?
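To put rough numbers on the grid-size observation: a minimal sketch, assuming Ladybug’s grid size is the spacing between analysis points on the surface (the usual convention), so the point count scales with the inverse square of the grid size. The surface area used here is purely illustrative.

```python
# Rough estimate of how Ladybug's analysis grid size affects the number
# of test points (and hence the result data stored per iteration).
# Assumes grid size = spacing between points; 500 m^2 is a made-up area.

def grid_points(surface_area, grid_size):
    """Approximate number of analysis points for a given grid spacing."""
    return surface_area / grid_size ** 2

area = 500.0                            # m^2, hypothetical analysis surface
fine = grid_points(area, 2)             # grid size 2
coarse = grid_points(area, 20)          # grid size 20
print(fine / coarse)                    # 100.0 -> 100x more points at size 2
```

In other words, going from grid size 20 down to 2 multiplies the per-iteration results by roughly 100x, which is consistent with memory filling up far sooner at the finer setting.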

Wallacei only manages two types of data:
1. sliders and gene pools as control variables;
2. objectives as results for evaluation.

None of this data is affected by the Ladybug simulation grid size, and it is quite small in size: numerical values rather than geometry. We have run simulations with more than 1000 individuals per generation for more than 100 generations without any issue.
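For scale, a back-of-the-envelope estimate of the data Wallacei itself would store for a run like the one described above (1000 individuals per generation, 100 generations). The gene and objective counts are illustrative guesses, assuming one 8-byte double per value:

```python
# Back-of-the-envelope estimate of Wallacei's own stored data:
# one double (8 bytes) per gene/objective value per individual.
# genes=10 and objectives=3 are hypothetical counts for illustration.

BYTES_PER_DOUBLE = 8

def wallacei_data_bytes(generations, individuals, genes, objectives):
    values_per_individual = genes + objectives
    return generations * individuals * values_per_individual * BYTES_PER_DOUBLE

total = wallacei_data_bytes(generations=100, individuals=1000,
                            genes=10, objectives=3)
print(f"{total / 1e6:.1f} MB")  # ~10.4 MB total
```

Even at this scale the numeric data stays in the tens of megabytes, nowhere near enough to exhaust committed memory on its own, which is why the leak points to the geometry/simulation side rather than Wallacei’s stored values.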

A memory leak is caused by an app not managing RAM properly. If you have already narrowed the issue down to Ladybug, I would suggest uninstalling it and doing a clean install of all of its packages. Hope this info helps.
