Quadro 6000 - not performing as expected


I got a Quadro 6000 for a good price (though still a lot of money), and I'm glad to have more memory available to work on several Rhino tasks with complex models at the same time, plus PS … .

But the Rhino display speed is the same as with my old GTX 285. The bike test model needs the same time for TestMaxSpeed, and my last train project model runs at 0.5 fps, just like before.
From what I have read here, the Quadro 4000 was faster than the GTX 285, so I expected some more speed. I uninstalled all the old Nvidia software and installed the latest driver. I set the global settings to dynamic streaming.
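For reference, the 0.5 fps figure follows directly from the TestMaxSpeed timing. TestMaxSpeed redraws the viewport a fixed number of times (100 frames by default, as far as I know) and reports the elapsed time; the conversion is just frames divided by seconds. A minimal sketch (the helper name is my own, not a Rhino API):

```python
# Hypothetical helper: convert a TestMaxSpeed-style timing into frames per
# second. Assumes the default run of 100 viewport redraws.

def fps_from_testmaxspeed(elapsed_seconds: float, frames: int = 100) -> float:
    """Return average frames per second for a timed redraw run."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return frames / elapsed_seconds

# Example: if 100 frames take 200 seconds, the model runs at 0.5 fps,
# the rate reported for the train model above.
print(fps_from_testmaxspeed(200.0))  # 0.5
```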

Did I choose the right driver? (see screenshot)

What can I do? Are there more tricks?

And when will Rhino use the power of this card? Please, RMA team, I did my part and bought a modern pro CAD card. Please give me more speed.



Don’t use the performance driver.
It has been unstable each time I have tried it. Go for ODE.

And then go into the driver and set it to Workstation.
(I'll find a screenshot of where the setting is on Monday if you need it; I'm on my laptop now.)

AND know that you will not get much better performance; the card will most likely run at 20% load on NURBS with meshes. But you should know that by now :wink:

Yes, I got several system freezes, but since I found this fix http://www.youtube.com/watch?v=Zt00C-HXFbA (disable PowerMizer), the performance driver seems to work now. Should I still switch to the ODE driver?

I found the global setting for the workstation.

I get max 14% GPU usage here. :frowning:

I had never tested the bike test model in MoI3D before, so I did it today - no lag, fast as expected. Either Michael from MoI3D is a genius, or the McNeel team is too weak at writing a fast display pipeline. The gap between what could be possible and what is possible will widen every month, every year.

Holo, attached is the Quadro 6000 Holomark 1 score. In GPU1 (curves/NURBS) I miss the speed, while GPU2 (synthetic mesh test) is extremely fast.


Holomark 1.23

Score: 23663 Clocked time: 21.13 sec (Total: 70.87sec)

GPU1: 4.77 sec.
GPU2: 0.02 sec.
CPU1: 10.89 sec.
CPU2: 5.45 sec.
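As a quick sanity check of my own (not part of Holomark): the four sub-test times add up exactly to the reported clocked time.

```python
# Sum the Holomark sub-test times reported above and compare with the
# "Clocked time" line (21.13 sec).
times = {"GPU1": 4.77, "GPU2": 0.02, "CPU1": 10.89, "CPU2": 5.45}
total = round(sum(times.values()), 2)
print(total)  # 21.13, matching the reported clocked time
```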


Intel® Xeon® CPU E5-2687W 0 @ 3.10GHz
2 CPUs with 8 cores each at 3.101 GHz (total: 16 logical processors)

NVIDIA Quadro 6000 with 6GB
2560 x 1440 x 4294967296 colors

Didn’t someone say that MoI is running DirectX and not OpenGL?

I read that somewhere too. For me as a user it doesn't matter what it uses; only speed counts. So why not add a DirectX display mode for pro users with complex models?

I cannot see that switching to DirectX would change anything, as OpenGL is about just as fast. It is just that the rest of Rhino is not optimized to feed any graphics pipeline at full speed, except with large meshes.

Hi Micha

In real world tests we found that the K2000 card was the best speed/price balance for Rhino. Rhino doesn’t speed up exponentially with the more costly cards.

However, if you’re using V-Ray then you should see good results with V-Ray RT and CUDA acceleration. We’ll be testing these cards with V-Ray RT shortly.

Phil Cook


Well, CUDA performance is certainly no reason to choose a Quadro card (e.g. Quadro 6000 -> 448 CUDA cores, K2000 -> 384 CUDA cores).
Quadro cards are traditionally designed for display performance. If you want CUDA performance, you are much better off buying a second (or third) card such as a Titan (2688 CUDA cores), a GTX 780 Ti (2880 CUDA cores), or, for a better price/performance ratio, a used GTX 590 (1024 CUDA cores), and mounting it next to your Quadro card. This way you can use the CUDA power of the game cards for rendering while retaining good display performance with your Quadro card (which should of course be excluded from rendering…)
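To make the gap concrete, here is a small sketch comparing the core counts quoted above. Core counts alone ignore clock speed and architecture, so treat the ratios only as a rough first-order indication of relative GPGPU throughput, not a benchmark:

```python
# CUDA core counts taken from the post above.
cuda_cores = {
    "Quadro 6000": 448,
    "Quadro K2000": 384,
    "GTX Titan": 2688,
    "GTX 780 Ti": 2880,
    "GTX 590": 1024,
}

# Ratio of each card's core count to the Quadro 6000 baseline.
baseline = cuda_cores["Quadro 6000"]
for card, cores in cuda_cores.items():
    print(f"{card}: {cores} cores, {cores / baseline:.1f}x the Quadro 6000")
```

By this crude measure a Titan offers exactly 6x the CUDA cores of the Quadro 6000, which is why a cheap game card next to the Quadro is the better rendering buy.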

However, since Micha is interested in display performance, that won't help him…

best regards


The primary point I was trying to make is that in Rhino display terms you won’t see much performance increase (especially compared to price differential) beyond cards like the K2000.