Status of Rhino.Compute

I was trying to find out if Rhino.Compute is still being actively worked on. We built an internal Python application that uses Rhino.Compute to process Grasshopper workflows, automating a custom manufacturing process that runs continuously. Since our design engineers use Rhino, our only route to automation is interfacing with Grasshopper through an API, and this project (or the NodeJS version) provides that framework. Both projects seem to have very little development activity.

I reported some memory bugs back in March in the GitHub issues list, but I haven't seen any feedback or responses in the last six months.

All of these projects are still actively being worked on. It is going to be hard to reproduce the issue that you are seeing with your custom setup, so I’m not exactly sure how we can fix this.

You may want to consider using our new Rhino.Compute architecture instead of directly launching compute.geometry.exe. This will bring up and take down compute.geometry.exe processes based on demand. This will also save money if you have periods of time when compute.geometry does not need to be running.

We are using the newer setup with the IIS application pool. What was happening is that Rhino.Compute would only spawn the compute.geometry process when it received a web request. It then took 45 seconds to start up, and the application I wrote was timing out. It was also spawning 4 processes but only ever using one of the four until it ran out of memory. I changed the startup flags to childcount = 1 and spawn-on-startup = True, but it still doesn't start until it receives a request.

In our use case we send a web request to the grasshopper endpoint with a 16MB GHX script, a 5MB STL file, and a small CSV as input, and receive back a few hundred scan points as x/y/z coordinates plus a small set of comma-separated string values. After about 7-8 files, the working-set memory of the compute.geometry service is in the 10GB range. If I let it go for 10 minutes, the Windows server running the compute.geometry service stops responding (it runs out of memory). So I started manually recycling the process in my code when its working-set memory reaches 10GB. I am also manually killing the compute.geometry process when I have no more incoming files to process. When new files show up, I manually hit the healthcheck endpoint with a web request, which causes a compute.geometry process to spawn, then sleep for 45 seconds to wait for the startup to complete.
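For reference, here is a minimal sketch of how we pack that request body in Python. The "algo"/"pointer"/"values" field names follow the publicly documented compute.rhino3d grasshopper request schema, but the input parameter names ("StlFile", "CsvData") are hypothetical placeholders: they must match the named inputs in your own definition.

```python
import base64
import json

def build_grasshopper_payload(ghx_bytes, stl_bytes, csv_text):
    """Pack the Grasshopper definition and inputs into the single JSON
    body sent to Rhino.Compute's /grasshopper endpoint.
    Sketch only -- parameter names below are illustrative."""
    def string_branch(text):
        # InnerTree data values are themselves JSON-encoded strings
        return {"0": [{"type": "System.String", "data": json.dumps(text)}]}

    return {
        # full definition, base64-encoded (this is the 16MB GHX going up
        # on every single request)
        "algo": base64.b64encode(ghx_bytes).decode("ascii"),
        "pointer": None,
        "values": [
            {"ParamName": "StlFile",
             "InnerTree": string_branch(base64.b64encode(stl_bytes).decode("ascii"))},
            {"ParamName": "CsvData",
             "InnerTree": string_branch(csv_text)},
        ],
    }
```

The payload dict can then be serialized with `json.dumps` and POSTed to the grasshopper endpoint.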

We basically hit a 10-minute window of processing before we run out of RAM. I can keep increasing the RAM in the host server, but that only buys a little more time. So the other capabilities of the IIS app pool, spinning up and recycling those processes, are not helping in this situation.

So, really I am using the IIS config as normal, just with the one child process, since that was all I was able to use. Then I call the healthcheck endpoint manually and sleep for 45 seconds to wait for the startup, so my actual data-processing web request doesn't time out. The only thing outside the normal flow is checking the working-set memory of the compute.geometry process and killing it when it gets to 10GB, then calling the healthcheck, which fires up a new process. This lets me get up to 10 files processed before I have to recycle the process and wait. I can watch the compute.geometry process manually, and it never releases any memory while it's running. If the normal use case doesn't grow the working set this much, maybe it's just not noticeable to the average user.
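The recycling policy described above can be sketched roughly like this. The three callables are injected so the policy itself is testable; in our real code `get_working_set` reads the process working set (e.g. via psutil), `kill_process` terminates compute.geometry, and `warm_up` hits the healthcheck endpoint. The 10GB and 45-second numbers are the observed values from our setup, not anything prescribed by Rhino.Compute.

```python
import time

RECYCLE_LIMIT_BYTES = 10 * 1024**3  # observed ~10GB working-set ceiling
STARTUP_WAIT_SECONDS = 45           # observed compute.geometry startup time

def recycle_if_needed(get_working_set, kill_process, warm_up,
                      limit=RECYCLE_LIMIT_BYTES, sleep=time.sleep):
    """Kill compute.geometry when its working set crosses the limit,
    then hit the healthcheck and wait for the replacement to start,
    so the next real request does not time out.
      get_working_set() -> current working-set bytes of compute.geometry
      kill_process()    -> terminate the bloated process
      warm_up()         -> e.g. GET the appserver's healthcheck endpoint
    Returns True if a recycle happened."""
    if get_working_set() >= limit:
        kill_process()
        warm_up()
        sleep(STARTUP_WAIT_SECONDS)
        return True
    return False
```

We call this between files; it is a workaround for the memory growth, not a fix.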

I am not sure how the application pool decides which compute.geometry process to use when it spawns 4; is that chosen by the host making the request? It doesn't seem to round-robin the incoming requests in my scenario. Ideally, requests would load-balance across these processes, and then the startup of a new process would not impact processing speed, since the next request could use another already-running instance of compute.geometry while the other instance restarts. I assume this is the intent of the design; it just doesn't seem to be working that way.

I am using the latest Rhino Compute files from January 2022 and I do have it running on more than one system and I get the same results, so the behavior is not unique to one installed instance.


That seems pretty extreme. Are there plugins installed for Rhino on these running instances of compute?

Is this all getting embedded into a single large JSON payload in the request?

We have the following plugins: Galapagos, Kangaroo2 Components 2.5.3, Heteroptera, Human UI 0.8.8, MetaHopper 1.2.4, and Pancake.
Not all of them are used in the scripts we run. The Human UI and Heteroptera components cause errors under Rhino.Compute, so those get stripped out of the production Grasshopper scripts we call programmatically. The Human UI components were only used by the design engineers to run the script manually when designing changes or manually pushing files through Grasshopper.

I do send all that data in one large JSON payload.

We are planning to look into using the Node.js version in the future, which would alleviate sending the Grasshopper script up with every request and make it more feasible to do some parallel processing.

We looked today at some of the code in GitHub for Rhino.Compute, and it looks like it will always use the first instance of compute.geometry unless that instance is busy, in which case it sends the request to the additional instances. I am doing everything serially at the moment, so I don't send the next JSON bundle to the app server until I have finished processing the previous one; in this scenario I would never trigger any round-robin. That answers my question about why additional compute.geometry instances are never used in my setup.
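Given that dispatch behavior, the only way extra children earn their keep is to have several requests in flight at once. A minimal sketch, assuming `post` is any callable that sends one payload (e.g. a `requests.post` wrapper) and `childcount` is the number of compute.geometry children configured:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_many(payloads, post, max_workers=4):
    """Send several solve requests concurrently instead of serially.
    With overlapping requests in flight, a "first idle child" dispatch
    spills onto the second, third, ... compute.geometry children;
    fully serial traffic always lands on child #1.
    max_workers should not exceed the configured childcount.
    Results come back in the same order as the input payloads."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(post, payloads))
```

For our current serial pipeline this doesn't apply, but it would matter if we parallelize later.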

This might be one of the reasons compute is blowing up with regard to memory consumption. Compute attempts to cache the Grasshopper definition and returns a hash that can be used in future calls, so a new definition isn't loaded into memory for every call and less data needs to be sent to compute.

Hops does this in its GetRemoteDescription function call. This is the initial call to compute to determine a definition's input and output schema. First, the definition is packed into a base64-encoded string and sent to compute.

Compute returns the schema defining the inputs and outputs along with a “cachekey”.

This cachekey is then sent to compute instead of the entire Grasshopper definition in future solve calls. The cachekey is about 36 characters long, rather than the full definition being passed to compute again.
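In client code, that two-step flow amounts to swapping the "algo" field for a "pointer" field once the cachekey is known. A sketch, assuming the "algo"/"pointer" field names of the public compute.rhino3d request schema (verify against your server version):

```python
import base64

def solve_body(definition_bytes=None, cachekey=None, values=()):
    """Build a /grasshopper request body, preferring the ~36-character
    cachekey over re-sending the full base64 definition.
    The first solve (or an initial /io call) supplies the definition
    and yields the cachekey; later solves pass only the cachekey."""
    if cachekey is not None:
        return {"algo": None, "pointer": cachekey, "values": list(values)}
    if definition_bytes is None:
        raise ValueError("need either a cachekey or the definition bytes")
    return {
        "algo": base64.b64encode(definition_bytes).decode("ascii"),
        "pointer": None,
        "values": list(values),
    }
```

For a multi-megabyte GHX definition sent on every call, this is a large per-request saving.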

The node.js AppServer sample that we have also does some similar optimizations in order to make it so the definition doesn’t have to constantly be sent to the server.

Thanks for that info, we will start looking into implementing the cache key.

Does the cache key work with the grasshopper endpoint, or would we need to use the IO endpoint? I didn't see any parameter options in the grasshopper endpoint code for sending a cache key. Also, can we use the IO endpoint the same way we are using the grasshopper endpoint even if we are not using the Hops control in the Grasshopper scripts? I have not looked into the Hops functionality very much; since we are not using Hops objects in the Grasshopper scripts, I assumed it was tied to those objects rather than being accessible from a separate Python application.

Yes, the endpoints in Rhino.Compute can be called by any client.