Wish: Add an optional secondary coarse LOD for the NURBS models

Since I work with scenes consisting mostly of hundreds or thousands of objects, and I want to see the majority of them on-screen at the same time, it’s not feasible to constantly hide them across countless layers just to keep the viewport at a stable framerate. What about implementing an optional secondary coarse LOD (level of detail) for objects, which switches based on the distance of each object to the camera (that distance should be customizable)? I’m willing to wait a bit longer for the initial calculation of the render mesh so that each object gets two different LODs: one for close-up inspection and work, and a secondary one that activates once the object gets far away from the camera. I’ve noticed that objects with tens of thousands of polygons still consume huge resources even when they are very far from the camera and only about 1 pixel in size. Video games solve that problem by using LODs for each object, which lets them render many times more objects with only a minor loss of visual quality on the distant ones (the latter don’t need to be fully detailed anyway).

For example, instead of having 1000 balls in a row with 10 000 polygons each (10 000 000 polygons in total burdening the graphics card), depending on the camera angle and distance to each ball, the viewport could render 10 of those balls at their full polygon count (10 000 polygons each) while the remaining 990 balls are rendered with a coarse LOD (say, 1000 polygons each). That makes a total of 1 090 000 polygons, or nearly 10 times less geometry, hence a much smoother framerate and a better user experience.
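
Just to make the arithmetic explicit, here is the same example as a few lines of plain Python; the counts and the 10/990 split are the hypothetical numbers from above:

```python
# Quick sanity check of the numbers above (plain Python, no Rhino needed).
fine = 10000       # polygons per ball at full detail
coarse = 1000      # polygons per ball in the hypothetical coarse LOD
near, far = 10, 990

with_lod = near * fine + far * coarse      # 1 090 000 polygons
without_lod = (near + far) * fine          # 10 000 000 polygons
print(without_lod / float(with_lod))       # ~9.2x less geometry per frame
```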

Some Rhino users may not like the visual switching between the fine and coarse LODs, so this must be an optional feature instead of a forced behaviour.

I’m aware that there is a “Custom mesh” setting for each object, so certain objects can be made much more or much less detailed; however, that takes far too long and is entirely manual work. My proposal is to make Rhino a bit smarter and calculate a coarse LOD automatically, if the user wishes so.
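
To illustrate what “calculate a coarse LOD automatically” could mean in practice, here is a rough IronPython sketch using RhinoCommon’s MeshingParameters. The parameter values below are made-up placeholders, not actual Rhino settings, and the real LOD switching would of course have to happen inside the display pipeline:

```python
# Rough sketch (IronPython inside Rhino) of the meshing step only; the LOD
# switching itself would have to live in Rhino's display pipeline. The
# MeshingParameters values below are made-up placeholders, not McNeel settings.
import Rhino
import scriptcontext as sc

coarse = Rhino.Geometry.MeshingParameters()
coarse.GridMaxCount = 16       # very low grid density
coarse.SimplePlanes = True     # flat faces get minimal meshes
coarse.RefineGrid = False      # skip the refinement pass

for obj in sc.doc.Objects:
    geo = obj.Geometry
    if not isinstance(geo, Rhino.Geometry.Brep):
        continue
    lod_meshes = Rhino.Geometry.Mesh.CreateFromBrep(geo, coarse)
    if lod_meshes:
        faces = sum(m.Faces.Count for m in lod_meshes)
        print("{}: coarse LOD would be {} faces".format(obj.Id, faces))
```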


That’s ONE of their tricks. And note each of those LOD levels has been handmade (or AI processed) to actually “work,” to provide orders of magnitude fewer polygons while still looking acceptable. This also heavily involves the textures that carry so much of the visual quality load. An automatic probably-coarser-by-half-maybe mesh on your Rhino models isn’t going to double your speed. And your models haven’t been specially designed to ensure that you don’t actually see too many polygons at any one time.

What really kills Rhino’s display performance isn’t so much the polygon count (within reason) as the sheer number of separate objects, each of which has a ton of baggage attached to it. Joining up surfaces and using blocks is going to help.

It’s solved in Unreal Engine 5’s Nanite, but I think that’s much more sophisticated than two LODs; it’s adaptive.


In my personal experience, polygon count heavily affects the framerate in the viewport, especially in display modes with active global illumination. I have heavy Rhino files with thousands of objects, which forces me to set a custom render mesh on every screw, nut, rod end, ball joint and basically 90% of the objects, especially those containing ball shapes, because they tend to add an excessive amount of polygons despite their tiny size. Once I spend a lot of time manually applying a coarse custom mesh to those objects, the framerate immediately becomes smooth and the user experience vastly improves, despite the scene still having the same high number of objects.
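
As a stop-gap, here is a hedged rhinoscriptsyntax sketch of how the selection part of that manual workflow could be scripted: grab every object whose bounding box is smaller than some threshold, so the coarse custom mesh can then be applied to the whole selection in one go. The 25-unit threshold is an arbitrary example:

```python
# Select everything whose bounding-box diagonal is under a size threshold,
# so a coarse custom mesh can be applied to the whole selection at once.
import rhinoscriptsyntax as rs

THRESHOLD = 25.0  # model units; whatever counts as "small hardware" in your file

small_objects = []
for obj_id in rs.AllObjects():
    corners = rs.BoundingBox(obj_id)
    if not corners:
        continue
    diagonal = rs.Distance(corners[0], corners[6])  # opposite corners
    if diagonal < THRESHOLD:
        small_objects.append(obj_id)

rs.UnselectAllObjects()
if small_objects:
    rs.SelectObjects(small_objects)
    print("Selected {} small objects for a coarse custom mesh".format(len(small_objects)))
```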

As for the degradation in visual quality, this is why I specifically mentioned “optional” in my request, because some Rhino users with super powerful workstations may prefer to work with very dense meshes in all conditions. :slight_smile: I use a humble Nvidia GTX 1660 Ti, so any custom optimization of the render mesh has a direct impact on the performance of my Rhino 7.


Here is a direct comparison between the same mirrored object in a scene using “Jagged and faster” by default. The object on the left side is unaltered and consists of 64 912 polygons, whereas its exact copy on the right side uses a custom mesh with the following settings and has a total of 35 772 polygons. This is how an optional secondary LOD could save almost half of the geometry in use cases where the optimized object is far away from the camera. Once the camera is close enough to the object, Rhino could switch back to the default, denser mesh:


Keep in mind that threaded screws, balls and many other rounded objects represented by the default Rhino meshing algorithm are tens of times denser than their counterparts with the same super-quick-to-calculate coarse mesh setting used above. Those two mirrored balls, for example, consist of 16 128 polygons and 224 polygons, respectively:
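
(For reference: 16 128 / 224 = 72, so the coarse ball carries roughly 72 times fewer polygons.)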

P.S.: The user should also be able to choose whether the optimized coarse LOD mesh is stored in the file at the cost of a larger file size, or re-calculated right after the file is opened in the next session.

Then just use your optimized settings?

There’s no compelling reason to use the default options; they’re not going to be optimal for any particular kind of geometry. There would be much lower-hanging fruit in trying to automatically improve THAT before worrying about implementing LOD.

So that’s really not enough video card to be running 4K; I have a 1080 Ti, which blows that out of the water, and I don’t use 4K. And you’re using something like a stereo plugin, which renders everything twice? And any future performance improvements are probably going to require a new video card to leverage, as they will all be based on AI…


As I mentioned before, manually applying a custom mesh setting takes a lot of time, not to mention that those objects would look coarse in close-up views UNLESS the custom mesh setting is manually disabled and the default mesh re-calculated (which takes much more time) every time I need to look at an object up close. :slight_smile: Imagine having to do that 1000 times per session and wait for the re-calculation of the dense render mesh…

You can probably find a happy medium, and crap like bolts doesn’t even have any NEED to look good at all. Also, blocks help a ton with those things. Instancing is great; I recently rendered out on my GPU a model with an effective >500 million polygons thanks to blocks.
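
For what it’s worth, here is a minimal rhinoscriptsyntax sketch of that block workflow; the prompts, the block name and the instance positions are just made-up examples:

```python
# Turn one screw into a block definition, then place instances instead of
# copies, so the heavy render mesh exists only once in the file.
import rhinoscriptsyntax as rs

screw = rs.GetObject("Select the screw to turn into a block")
base = rs.GetPoint("Pick the block base point") if screw else None
if screw and base:
    rs.AddBlock([screw], base, "M5_screw", delete_input=True)
    for i in range(1000):  # every instance reuses the single definition mesh
        rs.InsertBlock("M5_screw", (i * 10.0, 0, 0))
```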


I already disabled the StereoView plug-in, yet my most demanding scenes still burden the viewport performance as before. This is why I apply a coarse custom render mesh to most models (screws, nuts, ball joints, rod ends, springs, etc.) that are less important and already finished. I just tried with the custom mesh disabled, and Rhino is basically several times slower than before because of the huge number of render mesh polygons on all the rounded objects.

The coarse LOD I propose in this topic is a general solution for all Rhino users and won’t need any action on their side (except that the programmers at McNeel will have to implement it); it will also consume less electricity and make their machines less loud. The beauty of an LOD system is that it’s automatic and renders the same object with two different meshes based on its location relative to the camera. Up close the objects will look nice and smooth, while the coarse mesh is used only on distant objects.

Blocks are not a solution, because dense meshes occur on the majority of objects, including car body panels, tubes, A-arms, hinges, etc.

Then ask for smarter basic mesh settings that work on a wider variety of objects instead of this LOD nonsense, which is one of those things that comes from games and is not really applicable to content creation. Effective LOD models require processing hyper-detailed close-up models; they’re like 4 polygons with textures created from 20-million-poly models… see the problem?

I don’t agree with this, because my proposition is about adding an optional, very-fast-to-process, COARSE secondary LOD :slight_smile: as an addition to the default render mesh already used by Rhino. :slight_smile:

You have no idea whether the settings that work for you on your models are actually going to be “COARSE” for everyone in all situations. And you’re assuming this will add no overhead.

I’m pretty sure that I know enough about 3D polygonal video games and how real-time LOD switching works. I used to be a game modeler 15 years ago, which included creating multiple LOD versions of the original dense models, and I’ve been an active gamer since 1985. Calculating a coarse mesh for NURBS models with the settings I showed above (included here as well) is very fast for Rhino itself to process. It’s the manual selection of objects and adding a custom mesh to them, as well as frequently turning those settings on or off, that makes the process very slow and tedious. Not to mention that Rhino is forced to re-calculate the default dense mesh every time I turn off the custom mesh. The LOD approach from my proposal avoids that entirely. Plus, it could be optional, so it would do no harm to those users who prefer to see overly dense models at all times, even when they are 1-2 pixels small on their screen.
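
Conceptually, what makes the proposal cheap at runtime is that both meshes are built once and cached, so switching LODs is a lookup rather than a re-mesh. A tiny illustrative sketch, with made-up names and a placeholder 50-unit threshold:

```python
# Both LODs are generated once and cached; switching is a lookup, never a re-mesh.
lod_cache = {}  # object id -> {"fine": mesh, "coarse": mesh}

def pick_render_mesh(obj_id, distance_to_camera, threshold=50.0):
    entry = lod_cache[obj_id]
    return entry["fine"] if distance_to_camera < threshold else entry["coarse"]
```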

Another smart optimization could be that if any object in the distance becomes smaller than a certain number of pixels (such as less than 10x10 pixels), Rhino could switch it from shaded to wireframe (! _SetObjectDisplayMode can do that, but it’s a fully manual command). Why waste resources on distant objects that are small enough on the screen, when those could be automatically and temporarily shown in wireframe until they are big enough to be seen with their render mesh again? :slight_smile:
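
Here is a hedged sketch of the screen-size test behind that idea: project each object’s bounding box into the active viewport and measure its pixel extent. This version only selects the objects that fall under the threshold, so _SetObjectDisplayMode could then be run on them manually; a real implementation would do the switch inside the display pipeline:

```python
# Measure each object's on-screen size by projecting its bounding box into the
# active viewport, then select the ones below the pixel threshold.
import rhinoscriptsyntax as rs
import scriptcontext as sc

PIXEL_THRESHOLD = 40.0  # the user-defined "smaller than 40x40 pixels" rule

viewport = sc.doc.Views.ActiveView.ActiveViewport
tiny = []
for obj_id in rs.AllObjects():
    corners = rs.BoundingBox(obj_id)
    if not corners:
        continue
    pts = [viewport.WorldToClient(c) for c in corners]
    width = max(p.X for p in pts) - min(p.X for p in pts)
    height = max(p.Y for p in pts) - min(p.Y for p in pts)
    if width < PIXEL_THRESHOLD and height < PIXEL_THRESHOLD:
        tiny.append(obj_id)

rs.UnselectAllObjects()
if tiny:
    rs.SelectObjects(tiny)
    print("{} objects are under {}x{} pixels on screen".format(
        len(tiny), int(PIXEL_THRESHOLD), int(PIXEL_THRESHOLD)))
```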


I’m curious what the Rhino developers think about the aforementioned ideas (auto-switching to a second, coarse LOD, as well as auto-switching to wireframe for distant/small objects) and whether they are interested in making Rhino’s viewports perform much better than they do now. :slight_smile:

Auto-Wireframe for distant nad smaller objects.3dm (2.8 MB)


Note that the object on the right side has zero render polygons, because it’s small enough to auto-switch to wireframe mode. It’s about 42 pixels big in the viewport and there is almost no difference in quality compared to the object on the left side, which is made of 18 206 triangles. Imagine having 1000 of those objects in the scene; they would result in more than 18 million triangles in total.
Auto-wireframe would make it possible to still have them in the scene, and those of them that meet the criterion of being smaller than X pixels on-screen (a user-defined setting) would free up a huge amount of resources on the graphics card. The beauty of auto-wireframe is that once those objects become bigger than the target size, they will look just like any other shaded object in the scene.


An extreme close-up of the 4 variants of the same object. The wireframe model will auto-switch to shaded mode once its size becomes bigger than 40x40 pixels (or any user-set size).

These discussions are irrelevant. Do you think the developers have not heard of this? And virtually everyone who brings this up has a potato for a computer; no future implementation of video-game-style LOD tech is going to work on anything but a new monster machine, since it will be leveraging AI or whatever and require at least an RTX 3080 to see any benefit.

And frankly, no, performance is not the primary concern; it’s compatibility with a wide range of hardware and ease of implementation. Only video games, with more developer resources than the entire CAD industry, have the budget to worry about performance, and they often fail!

And once you’ve worked with truly huge models and possibly dabbled in some programming with display conduits, you realize that the polygon count has only a marginal impact; it’s not really what makes Rhino “slow.” It’s that each separate “object” in the model brings a lot of baggage with it, slowing things down even if it’s just a point with zero polygons. Actually speeding up Rhino’s display would require somehow decoupling those things… and I won’t even presume to guess what that might entail.
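
For anyone curious, here is a minimal display-conduit sketch (RhinoCommon from IronPython, with a dummy sphere grid as test geometry) of what I mean: meshes drawn through a conduit carry none of the per-object document baggage, so you can push a lot of polygons this way:

```python
# Draw many meshes through one display conduit; they never become document
# objects, so there are no attributes, selection, or snapping attached to them.
import Rhino
import scriptcontext as sc
import System.Drawing

class ManyMeshesConduit(Rhino.Display.DisplayConduit):
    def __init__(self, meshes):
        self.meshes = meshes

    def CalculateBoundingBox(self, e):
        # Extend the view frustum so the custom geometry is never clipped.
        for mesh in self.meshes:
            e.IncludeBoundingBox(mesh.GetBoundingBox(False))

    def PostDrawObjects(self, e):
        # Draw every mesh in one pass.
        for mesh in self.meshes:
            e.Display.DrawMeshWires(mesh, System.Drawing.Color.DimGray)

# Dummy geometry: a 10x10 grid of sphere meshes.
meshes = []
for i in range(10):
    for j in range(10):
        sphere = Rhino.Geometry.Sphere(Rhino.Geometry.Point3d(i * 5.0, j * 5.0, 0.0), 2.0)
        meshes.append(Rhino.Geometry.Mesh.CreateFromSphere(sphere, 20, 20))

conduit = ManyMeshesConduit(meshes)
conduit.Enabled = True  # set to False (and redraw) to stop drawing
sc.doc.Views.Redraw()
```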

You’d have to ask the Rhino developers whether they have heard that before or not. I’m a basic Rhino user, and as such I try to give my honest opinion and suggestions on how Rhino could become a better program (not just for me, but for all users).

Keep in mind that working in Rhino with a 3D mouse further impacts the framerate in the active viewport. Polygon count has a HUGE effect on the framerate; it’s not just the number of objects in the scene. Navigating the same 3D scene with a 3D mouse is a night-and-day difference between a version meshed with far fewer polygons and the version using Rhino’s default mesh settings.

Do you see any usable advantage in having an overly dense render mesh on distant/small objects? I would rather have zero polygons on 1000 distant screws automatically set to wireframe (once they get smaller than 40x40 pixels each) than 18 000 000 polygons that eat up resources and hit the framerate quite badly.
As I mentioned before, those wireframe objects should become shaded again once they are above the target size. This is not something that requires expensive development, AI, or lots of CPU and GPU time for calculations. It’s just automatic switching between display modes (shaded and wireframe), very similar to the existing ! _SetObjectDisplayMode command, except that it would be done automatically.