Rendering with Rhino.Compute on AWS?

I have a self-deployed RhinoCompute instance on a Windows server in AWS, with custom endpoints exposed via a plugin. I need the ability to generate static 2d renders of a model by loading the model, placing a camera at known locations/directions, and taking various 2d pictures of the scene with control over projection, lighting, and potentially anything else that can be controlled when using Rhino interactively. I’m fine with these renders being CPU-generated for now, or taking a few seconds; it’s more of a batch-processing scenario than a real-time request/response.

Is this at all possible with stock Rhino tools? If not, is there a 3rd party plugin/lib that could do it? It would be extremely convenient to just let my custom RhinoCompute plugin code snap these pictures in headless mode as part of a larger processing pipeline.

If the answer is “no” to all of the above, can anyone recommend some other (free / open source?) server system which would let me perhaps send over a .GLB file and get back a 2d render from whatever cameras are defined in the GLB?

Any general directions to look or specific systems to eval would be super appreciated. Thanks!

To my knowledge, this hasn’t been tested/created yet. I do know that trying to use the ViewCaptureToFile or something like that would fail in Rhino.Compute as there isn’t any viewport or screen, etc in a headless environment. Your best bet is to script this yourself in the method you created for your custom endpoint. As you said, you’d have to essentially add cameras, lights, etc. to your scene and then script the rendering process. Once the image has been created in memory, you could then serialize it as a base64 string and output it from Rhino.Compute. You could reconstitute your image by converting the string back to an image on the client side. Again, to my knowledge, no one has tried this (although several people have brought it up as a wish list feature).
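For the last step Andy describes, the client-side reconstruction is straightforward. Here’s a minimal Python sketch, assuming the custom endpoint returns the rendered image as a base64 string in a JSON field (the field name and function name here are just assumptions for illustration):

```python
import base64

def save_render(b64_string: str, path: str) -> None:
    # Decode the base64 payload back into raw image bytes (PNG/JPEG)
    image_bytes = base64.b64decode(b64_string)
    with open(path, "wb") as f:
        f.write(image_bytes)

# Example usage, assuming the endpoint's JSON response has an "image" field:
# save_render(response_json["image"], "snapshot.png")
```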

I want to second what @AndyPayne mentioned: I don’t think anyone at McNeel has tried this. That being said, I believe you should be able to automate running the Render command and get back the image, as this does not rely on the Rhino viewports. It is worth a try.

The _Render command pops up a window. I don’t know if that will work when running Rhino headless.


I asked the same a few months ago and got this response from @stevebaer:

We’ve been trying to achieve the same by using an AWS Lambda function to:

  1. Start a headless version of chrome
  2. Load a threejs scene with given camera settings
  3. Call rhino.compute and plug the geometry into the scene
  4. Snap images of the viewport from different angles
  5. Return the images from the lambda as png/jpg etc.

Too early to share any results, but it looks promising.
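For anyone curious what step 4 could look like in practice, here’s a rough sketch, assuming a headless-capable Chromium binary is available in the Lambda image and a local threejs page (scene.html) already has the geometry and camera wired up; the binary name, file paths, and dimensions are all assumptions:

```python
import subprocess

def snap_viewport(html_path: str, out_path: str, width: int = 1280, height: int = 720) -> str:
    # Ask headless Chromium to load the threejs page and write a screenshot to disk
    subprocess.run(
        [
            "chromium",                        # assumed headless-capable Chrome/Chromium binary
            "--headless",
            f"--screenshot={out_path}",
            f"--window-size={width},{height}",
            f"file://{html_path}",
        ],
        check=True,
    )
    return out_path

# e.g. snap_viewport("/tmp/scene.html", "/tmp/view_front.png")
```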

Hey Emil, thanks for the link over to your other thread, that’s a really interesting project! Your approach of leveraging chrome + threejs is clever; we currently use three elsewhere in our stack, and using it server-side would help us ensure parity between client UI visuals and server-generated snapshots.

In my case, we are already creating GLTF (GLB) exports of our geometry content, and down the road we’ll want these 2d renders of 3d content to have increasingly realistic features, possibly pulling in other assets that come more from the “pure 3d graphics” domain than from CAD. This is leading me towards just using Blender command-line execution (and probably a bit of python), passing in a GLB with camera(s) already placed and letting Blender do its Blendery magic, which would also let us add any funky shaders / lighting / etc. for custom effects.
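As a sanity check that the Blender route can work fully headless, here’s a minimal sketch of a Blender Python script (run with `blender --background --python render_glb.py`); the file paths and resolution are assumptions, and it simply uses the first camera found in the imported GLB:

```python
import bpy

# Start from an empty scene so only the GLB content is rendered
bpy.ops.wm.read_factory_settings(use_empty=True)

# Import the GLB (cameras and lights stored in the file come along)
bpy.ops.import_scene.gltf(filepath="/tmp/model.glb")

scene = bpy.context.scene

# Use the first camera found in the imported file as the active camera
for obj in scene.objects:
    if obj.type == "CAMERA":
        scene.camera = obj
        break

# Configure output and render a single still
scene.render.resolution_x = 1280
scene.render.resolution_y = 720
scene.render.filepath = "/tmp/render.png"
bpy.ops.render.render(write_still=True)
```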

There are some neat examples out there of wrapping a Blender command-line execution in an AWS Lambda function, such as for 3d animations where the lambda takes as input the 3d scene, the target height/width of the frames, and the animation time T at whatever FPS they need. So each frame is its own lambda call, and with just a button press they can generate a very impressive AWS bill :slight_smile: