I’m having an issue running a local rhino.compute server which appears to have some sort of memory leak. Let me know if I’m doing something wrong:
I spin up the local server by running the following on the command line from my hops rhino.compute folder:
./rhino.compute.exe --port 81
Then I am using a Python script to train a model where the reward is calculated via two separate Grasshopper files; the reward function call is as follows:
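(The original snippet isn't shown here, so as a hedged sketch: the endpoint `/grasshopper` and the field names `pointer` and `values` follow the usual rhino.compute Grasshopper schema, while the definition path and inputs below are hypothetical.)

```python
import json
import urllib.request

def build_request(definition_path, values):
    """Build the JSON body for a Grasshopper solve request (sketch)."""
    return {"pointer": definition_path, "values": values}

def solve(payload, host="http://localhost:81"):
    """Post a solve request to a local rhino.compute server."""
    req = urllib.request.Request(
        host + "/grasshopper",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # requires a running server
        return json.loads(resp.read())

# reward = solve(build_request("C:/defs/reward.gh", values=[]))
```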
@AndyPayne I wonder if I can run IIS locally and whether that might resolve the issue; otherwise, is there a command I can send the server to make it release the memory?
I’m not entirely sure how to advise you here. We do not have a method to kill a child process and restart a new one. You could try adding the argument --childcount 4 (or some other number greater than one) to your rhino.compute startup command. You can read more about what this argument does, but essentially it tells rhino.compute.exe to spawn more than one child process (i.e. compute.geometry.exe). If one of those child processes fails or crashes, rhino.compute should be able to continue by sending requests to one of the other child processes, and it will then try to spawn new children until there are as many as specified in the startup arguments. That doesn’t necessarily fix the memory leak, but perhaps it will keep you running?
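Concretely, that would change the startup command above to something like:

```shell
./rhino.compute.exe --port 81 --childcount 4
```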
Another thing you might check is the JSON schema that you’re sending to rhino.compute. There is a property called CacheSolve which essentially tells the server to store the results of each solve in memory. If the same input is then sent to the server, the server will check to see if this has already been calculated and will simply return the stored results rather than recalculating everything. This can help speed things up, but perhaps the cached results are filling up the memory and that’s what’s causing it to crash. Perhaps you can explicitly set this value to false in each of your requests to make sure that the results of that particular request are not stored in memory.
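A minimal sketch of forcing that from the client side, assuming the request body is a plain dict using the schema's cachesolve field:

```python
def disable_cache(payload):
    """Return a copy of a solve-request payload with server-side
    result caching explicitly turned off ("cachesolve" per the
    rhino.compute Grasshopper schema)."""
    out = dict(payload)
    out["cachesolve"] = False
    return out

# Hypothetical payload; the server should now skip storing this
# solve's results in memory.
request = disable_cache({"pointer": "reward.gh", "values": []})
```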
Lastly (and I don’t think this is really the issue), you may want to check C:\Users\yourname\AppData\Roaming\McNeel\rhino.compute\definitioncache. This is where rhino.compute stores some of its cached information. You can check this directory while your program is running to see if it is somehow filling up. Does this help?
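One way to watch that directory from a script, as a sketch (the path in the comment mirrors the one above):

```python
import os

def dir_size_bytes(path):
    """Total size in bytes of all files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            if os.path.isfile(full):
                total += os.path.getsize(full)
    return total

# Poll while training, e.g.:
# cache = os.path.expandvars(r"%APPDATA%\McNeel\rhino.compute\definitioncache")
# print(dir_size_bytes(cache))
```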
Thanks Andy, I tried with --childcount 4 but the same problem arises. In debug mode I can see that the JSON schema is using the default cachesolve=False, and the definitioncache doesn’t seem to be filling up.
I will clone the rhino.compute repo and see if running it the recommended way makes a difference. I suppose rhino.compute.exe might be inheriting a memory leak from elsewhere, which I want to investigate.
@AndyPayne I was able to get the compute server running in VS. I updated all the dependencies I could, but it still seems to be holding memory with every solve call; the garbage collector is working but can’t keep up (see snapshot below). Any pointers on where to search would be great.
Aaah ok, I added <ServerGarbageCollection>false</ServerGarbageCollection> to the project file to use workstation garbage collection, and it runs much more nicely on my machine, although memory is still climbing. I think there might be something I can do in the HTTP handling to either dispose of or reuse the HttpClient, based on the advice here: Memory management and patterns in ASP.NET Core | Microsoft Learn
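For anyone looking for it, that MSBuild property goes in a PropertyGroup of the project’s .csproj:

```xml
<PropertyGroup>
  <!-- false = workstation GC; true (the ASP.NET Core default) = server GC -->
  <ServerGarbageCollection>false</ServerGarbageCollection>
</PropertyGroup>
```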
Alright, for anybody who’s interested, I’ve managed to reduce the memory load on the workstation significantly. It’s probably only useful for debugging, but the code can be found here: GitHub - s7uvx/compute.rhino3d at local_compute
I can take a crack at it. I will say that there was some assistance from Claude Code, so someone who is better at C# should probably take a close look. In general the changes are as follows; you can see the diff log for more detail.
Added <ServerGarbageCollection>false</ServerGarbageCollection> to the solution.
Used an HttpClientFactory instead of System.Net.WebClient for the client.
Implemented cache limits and forced garbage collection (probably not a good idea, and I’m not sure this helps very much anyway).
Made GrasshopperDefinition IDisposable with cleanup watchers.
Added a Dispose method to GrasshopperDefinition.
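The “cache limits” idea is roughly this (the real change is in the server’s C# code, so this Python bounded cache is only an analogy of the pattern, not the actual implementation):

```python
from collections import OrderedDict

class BoundedCache:
    """LRU-style cache with a hard entry limit, so cached solve
    results cannot grow without bound (analogy only; the actual
    server-side cache is C#)."""

    def __init__(self, max_entries=100):
        self.max_entries = max_entries
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        self._store[key] = value
        self._store.move_to_end(key)
        while len(self._store) > self.max_entries:
            self._store.popitem(last=False)  # evict least recently used
```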
I think a few of the changes aren’t doing anything, but it’s gone from running out of memory after ~300 calls to completing 7,400 calls (after which the training run ended; I haven’t tried more than that).
Sure. We would definitely vet/test any changes that end up in the main branch… but I am curious as to what you ended up changing to fix the memory leak issue.
Sorry Andy, been busy with school … I can’t put my finger on any one specific thing; it seems to have been a combination of using workstation garbage collection, the HttpClientFactory, and disposing the Grasshopper definition.