Low Viewport Performance and GPU Utilization with RTX 3080 in Rhino 7

Hello everyone.

My viewport response and fps seem rather sluggish with Rhino 7.
I have a 3D model consisting of 1,400 surfaces and 7,800 polysurfaces.

I am using a Legion 7 laptop with 32 GB RAM, an AMD Ryzen 9 5900HX, and an RTX 3080 with 16 GB of VRAM that can draw up to 150 W. All my drivers and Rhino 7 SR14 are up to date. I am running Rhino in performance mode and have also made sure that the Nvidia settings prefer performance, etc. Games and Twinmotion run like a charm, and I have seen great fps with GPU utilization going up to 100% while it reached a 150 W peak. This info was shown via the Nvidia overlay, not Windows Task Manager.

Whereas in Rhino 7 I get:

Arctic viewport:
The GPU draws about 30 W, utilization is around 10%, and I get 6 FPS.

Shaded viewport:
The GPU draws about 35 W, utilization is around 22%, and I get 30 FPS.

How can I make Rhino 7 use my GPU to its full potential?

I have looked around a lot on this forum and there are many questions regarding this issue, but no clear answers. I also tried the Nvidia Studio drivers…

Games and content creation are completely different things, they are not comparable, stop comparing them!

Games are ‘static’: they have been pre-processed, and every aspect has been carefully designed to optimize performance in a way that is entirely impossible for content creation. How many million polygons is your model? Have you built it in a way that makes sure you don’t see too many of them at once? Have you spent months at it, and more money on that one model than the entire CAD industry spends on this stuff? Are all your round objects 4 polygons and a normal map?

Looking at Task Manager to figure out if your GPU is ‘being used to its full potential’ is not actually very informative. Most of the time Rhino is sitting idle waiting for you to do something. And it’s doing so in the context of a regular Windows application, so 5000 things happen every time you so much as click the mouse.

Note that with V6, Rhino’s usage of higher-end OpenGL features was increased to better leverage high-end hardware, with a resulting massive increase in performance for anyone using adequate hardware. But now that Rhino is no longer optimized to run on a potato, the biggest support question here is “Why does Rhino not run well on my laptop with specs that were mediocre in 2005?”

And of course content creation also doesn’t benefit from tons of CPU cores except for certain tasks. 9 women can’t make a baby in 1 month! This is the kind of thing that should be basic computer knowledge.

An object count approaching 10K is rather a lot. What are you doing?

I’m intrigued. How are you getting this number for a Rhino viewport?

TestMaxSpeed?

Huh? The OP didn’t

How is this relevant to the OP’s machine, which launched in 2020 (and is widely held to be one of the best gaming laptops currently available)?

It is a lot, but where did that number come from?

Thanks, just love those hidden test commands!
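If you want to trigger it from a script instead of typing it, here is a minimal Python sketch. Big caveat: test commands are hidden and unsupported, and I’m assuming they can be sent through rs.Command at all; if your build refuses, just type TestMaxSpeed in full at the command line.

```python
# Minimal sketch: try to run the hidden TestMaxSpeed command from Python.
# Assumption: the unsupported test command can be sent via rs.Command();
# if your build blocks that, type TestMaxSpeed manually at the command line.
import rhinoscriptsyntax as rs

# TestMaxSpeed redraws the active viewport repeatedly and prints the
# elapsed time and frames per second to the command history.
rs.Command("_TestMaxSpeed", echo=True)
```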

It means the extent to which all the latest hardware features can be used is limited.

The best way to really use a 3080 is with a GPU renderer; the OpenGL display doesn’t use CUDA for anything.

That’s good advice.

Of course it does use the hardware of the Nvidia card. All the NURBS you draw are converted into polygons, processed, and fed to your screen via the graphics card.

And??? None of that OpenGL realtime stuff hits the CUDA cores. Raytraced mode does.

Look, this is all beside the point. I want to know if there is a way to make better use of my hardware and get smoother viewport performance, not to have some virtual dick-measuring contest or hear stuff about pregnant women that derails the discussion.

The short answer is there isn’t, Rhino uses the OpenGL features it does and that’s it. It’s not a game, it’s not comparable. Especially if you don’t tell us anything about what you’re actually doing, why you have 10K+ objects in your model, and how many polygons they add up to.

People constantly ask why Rhino doesn’t use “100%” of their CPU, when, as should be basic computer knowledge by now, adding cores doesn’t magically speed anything up. Parallelization is very hard. Certain very narrow tasks can be parallelized, and some are, and there will be more in the future, but in general “content creation” is a linear task: do this, then that, then that. More cores can’t speed that up. Ergo, “9 women can’t make a baby in 1 month.” It’s a common analogy on the topic.
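To put a rough number on that, here’s a hypothetical back-of-the-envelope sketch of Amdahl’s law (the 20% parallel fraction is made up purely for illustration):

```python
# Amdahl's law: if only a fraction p of a task can run in parallel,
# the best possible speedup on n cores is 1 / ((1 - p) + p / n).
def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Hypothetical task that is only 20% parallelizable:
for cores in (1, 4, 16, 64):
    print("%2d cores -> %.2fx speedup" % (cores, amdahl_speedup(0.20, cores)))
# Even with 64 cores the speedup stays below 1 / 0.8 = 1.25x.
```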


Set one of the viewports to Raytraced display mode.

– Dale
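(Not an official workflow, just a rhinoscriptsyntax sketch if you prefer to switch modes from a script; it assumes the display mode is still named “Raytraced” in your install.)

```python
# Sketch: switch the active viewport to Raytraced display mode, which is
# where the RTX card's CUDA cores actually get used.
import rhinoscriptsyntax as rs

view = rs.CurrentView()                # title of the active viewport
rs.ViewDisplayMode(view, "Raytraced")  # assumes the default mode name
print(rs.ViewDisplayMode(view))        # confirm which mode is now active
```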


I am looking for better viewport performance, not CPU usage. Rhino translates my NURBS model into polygons. Sure, I have the option to make it more jagged and faster by converting it into fewer polygons. How many polygons that is in the end, I don’t know. The surface and polysurface count I provided was to indicate that it is indeed a heavy and fairly complex model. I do not know which components of my hardware Rhino is using via the OpenGL interface, or how much of the resources are allocated to polygons, shaders, etc. All I see is that it could potentially draw more power and utilize the GPU more, but instead it uses 1/5 of the power.

When I import the model into Twinmotion, I add a lot of people, vegetation and urban context and end up with a polycount of 40 million. With a lower viewport resolution, additional shaders, etc., I can reach 25 fps for walkthroughs, and there the GPU is utilized to the max. I understand that those two programs are not entirely comparable. However, I still wonder if there is a way to make Rhino use more of the hardware resources available.
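If you want an actual polygon number rather than a guess, a rough Python sketch like this one will total it up (my assumption: it reads the RhinoCommon render meshes, so it only counts objects that already have a render mesh cached; shade the model once first):

```python
# Rough sketch: total the render-mesh face count of every object in the document.
# Only objects that already have a cached render mesh are counted, so set a
# viewport to Shaded/Rendered first to force the meshes to be generated.
import Rhino
import scriptcontext as sc

total_faces = 0
for obj in sc.doc.Objects:
    for mesh in obj.GetMeshes(Rhino.Geometry.MeshType.Render):
        if mesh:
            total_faces += mesh.Faces.Count

print("Approximate render-mesh face count: {:,}".format(total_faces))
```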

Certain strategies can improve performance, like doing every trick you can to have fewer separate “objects” in your scene, but no, you can’t just “make it” use “more” resources; that’s just not how it works. It’s using everything it’s coded to use as fast as possible. And again, comparing it to a blasted Unreal Engine tool isn’t valid. Do you not think there’s a reason 3D creation software doesn’t use such engines for their displays? Do you think maybe it’s that it wouldn’t actually be faster once you dealt with all the specific requirements of such a thing?
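For what it’s worth, one version of that “fewer separate objects” trick can be sketched in Python: grab the render meshes of a pile of small objects and put them back as a single joined mesh. This is purely illustrative (the workflow here is my own suggestion, not a built-in feature); hide the originals rather than deleting anything.

```python
# Sketch: combine the render meshes of many selected objects into one mesh
# so the display pipeline has a single object to draw instead of thousands.
import Rhino
import scriptcontext as sc
import rhinoscriptsyntax as rs

ids = rs.GetObjects("Select objects to combine into one display mesh")
if ids:
    combined = Rhino.Geometry.Mesh()
    for obj_id in ids:
        rhobj = rs.coercerhinoobject(obj_id, True, True)
        for mesh in rhobj.GetMeshes(Rhino.Geometry.MeshType.Render):
            if mesh:
                combined.Append(mesh)
    if combined.Faces.Count > 0:
        sc.doc.Objects.AddMesh(combined)  # one mesh instead of many objects
        rs.HideObjects(ids)               # hide, don't delete, the originals
        sc.doc.Views.Redraw()
```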

Yes, indeed it does. But maybe you can shed some light here?
Maybe when I see 22% GPU utilization and a 30 W power draw in shaded mode, that is already max usage, since that specific viewport setting does not require the rest of the resources or can’t even make use of them?
Is there some link or info out there which explains that?

Yes. Shaded mode is by default very basic shading, and a small bit of the GPU is dedicated to that. All the additional ‘resources’ it’s ‘not using’ are for adding more elaborate effects, not for speed… that’s the short answer anyway. Of course, your overlay saying games are using “all” the GPU is not quite correct either: the CUDA cores are a separate thing, and they’re not generally used. They are used in Raytraced mode and other GPU raytracers, and are pretty awesome.


I still use a GTX970 and have no issues with the viewport. The issue that you’re experiencing has nothing to do with CUDA cores or OpenGL.

Download the updated drivers from Nvidia (NEVER UPDATE DRIVERS FROM WINDOWS UPDATE) and then try a clean install of the drivers with Display Driver Uninstaller.

Additionally, the drop in performance could be associated with the NVIDIA overlay. Disable it.

If you want an accurate measurement of what your GPU is doing, download HWiNFO, optionally combined with MSI Afterburner, and configure the monitoring parameters.