How to render a 7 million polygon design...crash, freeze, End Task

My goal is to import this design into a separate Rhino file containing the New York City skyline, with the buildings as boxes (already a 500 MB file), and maybe even render my building at the site where it would be completed.


Just render my building and take a screenshot of my building after importing it into the NYC file.

I have tried:

(1) Used Rhino’s built-in render engine: it froze/crashed the computer before the rendering dialog box even appeared.
(2) Connected to the D5 renderer from within Rhino 7 and 8: RAM shoots up to 99% and either Rhino crashes or the computer crashes/freezes.
(3) Eliminated materials/textures, reducing the Rhino file by about 100 MB, down to roughly 215 MB. Still massive RAM usage and freezing of Rhino; I must End Task before it crashes.
(4) QuadRemesh: takes forever even on smaller sections, and it does not seem to make the file smaller.
(5) ReduceMesh: definitely reduced both the file size and the polygon count (by about 1 million polygons, though not enough to render). I am concerned this will degrade render quality, and it takes an extremely long time to complete.
(6) Save As / Export to OBJ, FBX, and STEP to take the model to a separate rendering engine. Only the STEP export completed. The OBJ and FBX exports ran for over an hour before I had to End Task; pressing ESC would not stop Rhino from continuing its task.
(7) Attempted to import the 3DM file into the standalone D5 renderer; I let it run for more than an hour with nothing imported at the end.

I have thought about using ShrinkWrap or attempting to use Omniverse, Bella Render, or Twinmotion after reducing the polygon count.

Computer specs:
AMD Ryzen 9 7950X @5.2 GHz
64 GB DDR5 RAM @ 6000 MHz



(1) Does anyone have any idea as to what I can do to get this rendered?
(2) Do I need to just be patient and hope that one of the above methods will finish at some point without crashing the computer? If so, which method would you choose?
(3) Why does Task Manager show different amounts of RAM being used by Rhino, but total amount is still at approximately 99%?

Some materials (but the wrong ones), full image:

No materials full image:

Original shot just before render and saving:

RAM to 98 percent and Rhino more than 47 GB:


RAM to 99 percent but Rhino barely registering:


Tried using the Snipping Tool to capture the error messages, but there was not enough RAM:

Error messages:

Before shunting it through more renderers, I’d try and isolate the problem first.

I think the first step would be to split off your largest ground-level cylinder section and see if you can get that to render on its own. Reduce your model to the minimal geometry required, to see whether there are obvious things you can get rid of, or geometric snags causing the problem.

With polygons, I would also consider whether there are individual elements you can get rid of because they won’t even resolve at whatever your pixel scale is going to be. Perhaps you can render just the 2/3 of the building visible from the front, if that doesn’t cause problems with shadows or reflections.


7 million polygons isn’t that big a scene; I accidentally rendered 30-million-polygon scenes 10 years ago. There may be something “wrong” in your model. I can’t make out at all what any of those error messages are supposed to say. Or you’re just being impatient.

You need to look at the more detailed Resource Monitor statistics to see how your RAM is actually being used. Remember that when you have lots of RAM, Windows will allocate a bunch of it to a cache, but that doesn’t really take memory away from programs. Also make sure you have enough VIRTUAL MEMORY to cover the entire contents of your 4090’s VRAM for GPU rendering; I once had “out of memory” issues in iRay because of that.


Hi @Edward_Sager ! Maybe it’s related to issues with imported block instances. Just to check one more thing: does Rhino select any objects if you run the SelBadObjects command?

and what’s your GPU showing in the Task Manager performance tab when you hit render?



7 million is not bad. I would look at other things. Start removing parts until it starts to render and then we can get on to solving that problem.

  1. What resolution is the rendering set to? Try a small 880x600 rendering to see if that helps.
  2. Materials can have large texture bitmaps in them. Does rendering without materials run better?
  3. On models like this there may be a block that is overly detailed: mullion frames modeled with the actual aluminum extrusion, or a vent that repeats over and over and could be much simpler. Start turning off layers, as recommended in the messages above, to see when it starts rendering. Many times I find there is one block that is causing the trouble.

@René_Corella (great to hear from you again!)
Thank you for your reply. I believe that I have isolated the problem: each window block is not just windows, but also a multilayer wall alongside the windows plus all parts of an aluminum frame around each window as well as connections allowing for a floor-to-ceiling curtain wall which includes such things as rubber gaskets and silicone sealant. Plus, there is a second facade connected to the inside curtain wall which includes a perforated screen and more glass attached with spider fittings/brackets.

I got a polygon count of more than 7 million just for one (1) block. There are some 2100+ blocks on the tower.
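As a sanity check on the scale, a quick calculation (assuming, for illustration, that the renderer resolves every instance into geometry rather than sharing it, and using a rough 50 bytes per triangle that is my assumption, not a Rhino figure):

```python
# Figures from the thread: ~7 million polygons in one window block,
# ~2100 such blocks on the tower.
polys_per_block = 7_000_000
block_count = 2_100

total_polys = polys_per_block * block_count
print(f"{total_polys:,} polygons")  # 14,700,000,000 polygons

# Very rough memory floor if every instance is flattened,
# at an assumed ~50 bytes per triangle (vertices + indices + normals):
bytes_per_poly = 50
print(f"~{total_polys * bytes_per_poly / 2**30:.0f} GiB")
```

On the order of hundreds of gigabytes before any renderer overhead, which would explain 64 GB of RAM filling instantly.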

I turned off the layers that would likely not be seen in a rendering of an 810-foot-tall tower and I can now render.

Now that I know about the Resource Monitor, I will pay more attention to it.

Virtual memory is set to be automatically managed and currently has 15 GB allocated. I have read that you should set the “Initial size” to 1.5x the “total available memory” and the “Maximum size” to 3x the “available memory.” I assume “available memory” means RAM, which would give an initial size of 96 GB and a maximum of 192 GB. @JimCarruthers, is that your take on the sizes as well, or should I use the graphics memory (RTX 4090 = 24 GB) as the base for the multiplication?
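The arithmetic under the two possible readings of “available memory” (the 1.5x/3x rule of thumb is from the guidance quoted above, not something I can vouch for):

```python
# Reading 1: base the page file on installed RAM (64 GB here).
ram_gb = 64
print(1.5 * ram_gb, 3 * ram_gb)  # 96.0 192

# Reading 2: base it on the 4090's VRAM instead (24 GB).
vram_gb = 24
print(1.5 * vram_gb, 3 * vram_gb)  # 36.0 72
```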

@René_Corella I checked for bad objects and got the nice news of “No bad objects selected.” As for GPU usage, strangely, all I can remember is that it stayed close to 0%, at a cool 35–38 C, so the GPU fans never turned on. I also purged unused assets, which eliminated 65 MB of file size.

I guess I should not be disappointed that not everything in the design can be rendered at the same time, although I was hoping that using blocks would have streamlined things to make it possible with my computer setup.

Again, thank you all for helping me narrow down the issues and get a good rendering.


Good to read that the bug has been caught. I was asking about the GPU somewhat off-topic, wondering whether you had it set to handle ray-tracing (instead of the CPU) in the Cycles section of Rhino’s options.



it may or may not be feasible for your situation, but there is a tradeoff between the (memory) size of a block and the size of the geometry it contains, and it can be beneficial to avoid blocks that each contain very little geometry

for example, say you have a tree, where you can choose a) to make a block containing a single triangle, and distribute 20k of these blocks, or b) create 20k triangles and join them into a single disjoint mesh

in case (a) the size of each instance will be larger than the triangle itself, since each instance carries a name, material, transform, etc. the case (b) approach can drastically reduce the amount of memory required, and the time required to transfer data into a viewport or renderer, or to move the tree around interactively, and so forth: both because less memory is needed, and because it is quicker to iterate/sample the single mesh than to navigate (programmatically) thousands of single-leaf instances

in this example, it may be best to find a compromise, say 20 instances of a block that contains a mesh with 1000 triangles, so you can put 20 slightly different materials on each, add some randomization of position for the instances, and so forth
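a back-of-envelope sketch of the tradeoff (all byte counts are assumptions chosen for illustration, not Rhino’s actual internals):

```python
# Assumed sizes, for illustration only:
TRI_BYTES = 3 * 12 + 3 * 4   # 3 float3 vertices + 3 int indices = 48 bytes
INSTANCE_OVERHEAD = 512      # name, transform, material ref, etc.

def instances_of_one_triangle(n):
    # n block instances, each wrapping a single triangle:
    # pay the instance overhead n times
    return n * (INSTANCE_OVERHEAD + TRI_BYTES)

def one_disjoint_mesh(n):
    # n triangles joined into one disjoint mesh:
    # pay the overhead once
    return INSTANCE_OVERHEAD + n * TRI_BYTES

n = 20_000
print(instances_of_one_triangle(n))  # 11200000 bytes
print(one_disjoint_mesh(n))          # 960512 bytes
```

with these (made-up) numbers the single mesh is roughly 10x smaller, and the gap grows as the per-block geometry shrinks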

of course you do not always have the choice, but I just mention this since people have often been given the impression that instance == memory efficient, and it is not necessarily true; given your description of the situation, it seems your model may suffer from this issue


@René_Corella René, thank you for the reminder. I went in and saw that it was set to CUDA and the 4090 GPU, but I am wondering if OptiX would be better for what I consider might be a complex scene. I will test it.

@jdhill Thank you for the explanation. Does this mean that I might have a more efficient model for rendering if I explode the blocks and create new blocks with fewer parts? I do use nested blocks, but I am unsure if that helps or hurts my situation.

Thank you both for your responses. This has been very educational!

yes that is what I mean, but what is feasible naturally depends on your workflow

if you have features that won’t be moving around or changing, then maybe grab all instances that use the same material, explode them all the way down, extract render meshes if the geometry is nurbs, and then join those meshes into one disjoint mesh

you can see, this is something you can do to varying degrees, depending on the model and workflow
