Recently, while working on a complex model in Rhino 6, I noticed that it uses the video memory of GeForce cards poorly compared to Quadro cards, which results in greater use of the computer's system RAM.
I compared two cards, a Quadro M2000 and a GTX 1050 Ti, each with 4 GB of VRAM. For testing I used the GPUmeter and CPUmeter applications.
When rotating the complex model, the M2000 used about 3 GB of video memory, while the GTX 1050 Ti used only 0.7 GB. Similar differences occurred with other models. I also ran the Holomark 2_R6 benchmark and again saw a big difference in maximum VRAM usage: 4 GB on the M2000 versus only 1.3 GB on the 1050 Ti. In general, the GTX used about 3-4 times less video memory than the Quadro on the same task, and of course the GeForce system then used correspondingly more RAM.
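In case anyone wants to reproduce the measurements without GPUmeter: here is a minimal Python sketch (not the tool I used, just an assumption that NVIDIA's NVML is available via the pynvml bindings) that logs VRAM usage once per second while you rotate the model in Rhino. Running `nvidia-smi --query-gpu=memory.used --format=csv -l 1` from the command line gives roughly the same readout.

```python
# Minimal VRAM logger: samples GPU memory usage once per second via NVML.
# Start it, rotate the model in Rhino, then Ctrl+C to see the peak value.
import time
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):  # older pynvml versions return bytes
    name = name.decode()

peak_gb = 0.0
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # values in bytes
        used_gb = mem.used / 1024**3
        peak_gb = max(peak_gb, used_gb)
        print(f"{name}: {used_gb:.2f} GB used (peak {peak_gb:.2f} GB)")
        time.sleep(1.0)
except KeyboardInterrupt:
    print(f"Peak VRAM usage: {peak_gb:.2f} GB")
finally:
    pynvml.nvmlShutdown()
```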
Out of curiosity, I also compared video memory usage in the SPECviewperf 12.1 benchmark. In most cases the Quadro used VRAM to a greater extent than the GeForce, but differences as large as in Rhino 6 appeared only in the Siemens viewset (see the quick ratio check after the list). The lower the VRAM usage, the more system RAM was used, which is undesirable. Below are the maximum VRAM usage results for the individual viewsets, the first value for the M2000, the second for the GTX 1050 Ti:
3dsmax: 3.6 and 2.8 (GB)
catia: 2.6 and 2.0
creo: 2.8 and 1.3
energy: 4.0 and 4.0
maya: 3.3 and 3.2
medical: 3.2 and 4.0
showcase: 3.8 and 3.6
siemens: 2.8 and 1.0
solidworks: 3.3 and 2.5
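To put those numbers in perspective, here is a quick sketch (just rechecking my own figures from the list above) that computes the M2000-to-1050 Ti usage ratio per viewset:

```python
# Per-viewset ratio of Quadro M2000 peak VRAM usage to GTX 1050 Ti peak
# VRAM usage, using the values (in GB) listed above.
results = {
    "3dsmax":     (3.6, 2.8),
    "catia":      (2.6, 2.0),
    "creo":       (2.8, 1.3),
    "energy":     (4.0, 4.0),
    "maya":       (3.3, 3.2),
    "medical":    (3.2, 4.0),
    "showcase":   (3.8, 3.6),
    "siemens":    (2.8, 1.0),
    "solidworks": (3.3, 2.5),
}
for viewset, (m2000_gb, gtx_gb) in results.items():
    print(f"{viewset:<10} M2000/GTX ratio: {m2000_gb / gtx_gb:.2f}")
```

Only siemens (2.8x) and, to a lesser extent, creo (2.15x) approach the 3-4x gap I see in Rhino 6; medical actually goes the other way, with the GeForce using more.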
Do you have similar observations? Is Rhino 6 poorly optimized in its management of GeForce video memory?