Does Rhino 6 use the video memory of GeForce cards poorly compared to Quadro?

Recently, while working on a complex model in Rhino 6, I noticed that it uses the video memory of GeForce cards poorly compared to Quadro cards. This leads to greater use of the computer’s RAM.
I compared two cards: a Quadro M2000 and a GTX 1050 Ti, each with 4 GB of VRAM. I used the GPUmeter and CPUmeter applications for monitoring.
When rotating the complex model, the M2000 used about 3 GB of video memory, while the GTX 1050 Ti used only 0.7 GB. Similar differences occurred with other models. I ran the Holomark 2_R6 benchmark and also saw big differences in maximum VRAM usage: M2000 - 4 GB, 1050 Ti - only 1.3 GB. In general, the GeForce used about 3-4 times less video memory than the Quadro on the same task, and RAM usage was correspondingly higher for the GeForce.
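For anyone who wants to cross-check GPUmeter’s numbers with NVIDIA’s own tooling, VRAM usage can also be sampled with `nvidia-smi`, which ships with the driver. Below is a minimal Python sketch (the helper names are my own, not part of any tool mentioned in this thread) that polls `nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits` and records the peak value while you rotate the model:

```python
import subprocess
import time

def parse_memory_used(output: str) -> list[int]:
    # Parse the output of:
    #   nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits
    # into a list of per-GPU used-memory values in MiB (one line per GPU).
    return [int(line.strip()) for line in output.splitlines() if line.strip()]

def poll_peak_vram(duration_s: float = 30.0, interval_s: float = 0.5) -> int:
    # Sample GPU 0's used VRAM for duration_s seconds; return the peak in MiB.
    peak = 0
    end = time.time() + duration_s
    while time.time() < end:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        peak = max(peak, parse_memory_used(out)[0])
        time.sleep(interval_s)
    return peak

# Example (run while rotating the model in Rhino):
#   print(poll_peak_vram(duration_s=60))
```

On a 4 GB card, the resulting peak in MiB can be compared directly between the two GPUs for the same scene and the same viewport manipulation.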

Out of curiosity, I also compared video memory usage in the SPECviewperf 12.1 benchmark. The Quadro used more VRAM than the GeForce in most cases, but differences as large as in Rhino 6 appeared only in the Siemens benchmark. The less VRAM was used, the more system RAM was consumed, which is undesirable. Below are the maximum VRAM usage results for each benchmark, M2000 first, GTX 1050 Ti second:

3dsmax: 3.6 and 2.8 (GB)
catia: 2.6 and 2.0
creo: 2.8 and 1.3
energy: 4.0 and 4.0
maya: 3.3 and 3.2
medical: 3.2 and 4.0
showcase: 3.8 and 3.6
siemens: 2.8 and 1.0
solidworks: 3.3 and 2.5

Do you have similar observations? Is Rhino 6 poorly optimized in its management of GeForce video memory?

Did I misread your post? For consistency, I would expect results for four Rhino combinations: Rhino 5 with Quadro, Rhino 5 with GeForce, Rhino 6 with Quadro, and Rhino 6 with GeForce.

The tests show that Rhino 6 uses the VRAM of GeForce cards only to a small extent compared to other CAD programs. Unfortunately, this increases the computer’s RAM usage. With Quadro cards this problem doesn’t occur. That’s why I’m wondering whether this is a memory management error in Rhino 6 that could be fixed.

Would tests with Rhino 5 help in any way? Out of curiosity I will run them, but in a few days.

Hi - we don’t optimize for a specific GPU.

There might be settings in the drivers that you can change to increase performance.

So…you’re saying the GeForce isn’t consuming enough video memory? How did you “notice” that outside of running benchmarks? What exactly was the actual problem you saw? That is literally something I’ve never seen anyone complain about, in any context, with any game or CAD program. Was the GTX a lot slower than the Quadro? Is that unexpected? Neither card is particularly “good” even by the standards of when they were new, so the bar for them to meet is that they don’t crash; anything beyond that is gravy.

If you want Rhino 6 to consume a lot of VRAM, just create a heavy scene (lots of geometry, or even easier, huge textures) and switch to Raytraced :wink:

CPUmeter and GPUmeter show the memory usage during the benchmarks. As I have already mentioned, lower VRAM usage goes together with higher RAM usage. In the benchmarks where the GeForce used little VRAM, it consumed about 0.5-2 GB more RAM than the Quadro did, so the difference is significant.

I’m sorry, but…and? If there is in fact an actual problem (which at this point I doubt; benchmarks are not to be taken as gospel, for many reasons), it’s likely on the driver side, and there’s nothing McNeel can do about it.

Have you ever heard of memory management optimization? Because that is exactly the problem here. You have probably heard that SPECviewperf is a benchmark built from real CAD programs? Read the thread from the beginning: it clearly states that I noticed the memory management problem in real use of Rhino, before I ran the benchmarks.

I’m sorry, but I do not want to argue about the facts.
I’m just interested in whether other users have noticed similar differences in memory usage between GeForce and Quadro cards.