Yes, very interesting… Hmmm…
Philip
Hi All, Yes. I have read the article above and a few others about Nvidia crippling the CAD performance of the GeForce 300 series and later in order to sell more Quadros. The GTX8800/GTX9800 and GTX200 ranges were basically identical to Quadros apart from different firmware, and you could even flash a GeForce card to become a Quadro. This was hurting Nvidia’s Quadro sales, so Nvidia did something about it from the 300 series on. Firstly, from the 300-series GeForce on you cannot flash the firmware any more; secondly, Nvidia identified OpenGL features specific to CAD use and disabled them in the GeForce firmware.
That is why I noticed a serious decline in performance when going from my GTX280 to a GTX660.
And this is also why I only really got a performance increase from my GTX280 when I got my Quadro6000.
There is another post here also discussing someone who still sticks to GTX200 series cards because of their great price and performance compared to modern Geforce cards.
At least the many CUDA cores in GeForce cards can now be put to use as rendering cores in many rendering engines. I use my Quadro on the primary monitor and then use the 960 CUDA cores of my GTX660 for Vray RT and to drive my 2nd monitor. Despite what many websites say, the Quadro and GeForce work quite happily in the same PC. Michael VS
Hi Michael, you got a performance increase? I jumped from a GTX285 to a Quadro 6000 and it was maybe only a 25% increase. Did I do something wrong? Did you try the 5x5x4 test? What are your times?
Hi Micha
Yes, only a slight increase so it could have been around 25% too. I didn’t use a test tool, I just used the rotation speed of large models as a guide and I got a noticeable frame rate increase with the same model.
At the time I used a free gaming program called Fraps which displays the frame rate in the corner of the viewport to see the increase, and I got a few more frames per second on the Quadro6000 than with the GTX280. The Gtx660-2Gig was quite stuttery on the same model so that was obviously slower.
For my Quadro6000 I found that the Nvidia Quadro ODE drivers worked much better than the so-called “Performance Driver”. I see there is a new version here: 340.66
http://www.nvidia.com/download/driverResults.aspx/77637/en-us
I then set the Global Settings in NVidia Control Panel to:
Workstation App - Dynamic Streaming
I also found that the 3D App - Visualisation option worked ok too. - Michael VS
I have a Lenovo W530 laptop with a Quadro K1000M
(i7-3740QM 2.7GHz)
With 5x5x4 spheres, r=5, spacing = 12, default mode, 4x AA
I get 0.98 seconds with 4 viewports and 1.20 with a single viewport
I have read that somewhere else before as well…
Maybe Rhino has been granted “CAD software” status by nVidia and they enabled a setting in the Quadro driver?
I remember SolidWorks had display modes only available when a Quadro was present.
In case it’s of interest, my GTX650 (2 gigs of RAM) is so fast I can’t measure it. Not by my watch anyway (with or without shadows). This is probably a daft question, but we are not including the time it takes Rhino to build the initial rendering mesh, are we?
type “TestMaxSpeed” into the command line (without quotes)
Try typing the command TestMaxSpeed on the command line (does not autocomplete). It will report the result. Make sure your viewport is maximized and in Shaded mode with at least surface edges showing.
–Mitch
Quadro K4000, i5 on a Z77 itx board with 8GB ram, no overclock. W7 64 bit. 4x4x5 spheres in maximised perspective viewport (shaded), 4x AA/Nearest/ medium AF = 1.78 seconds in TestMaxSpeed. Viewport is 2354x1397 as it’s on a 30" monitor. Same test with no AA, no AF = 1.72 seconds. Both tests are repeatable and consistent.
Quadro6000 with i7-3770: I’m getting about 1.84sec.
Anyone else notice that theirs actually runs about 0.3 sec faster with the mesh settings on Smooth and Slow compared to Jagged and Faster? I get repeatable 1.84s (S&Slow) and 2.14s (J&Faster)?? - Michael VS
Wow, lots of traffic since I last checked…
Guys, I’m going to put my head down on this one today and finally figure out what the heck is going on… If I’m not replying, it’s not because I’m not listening…be patient… I WILL find this and hopefully find a way out.
-J
My money’s on the Dilithium Crystals…
Ouch!
Render mesh = jagged and faster
GL anti-alias x8: 9.31 secs
GL anti-alias x4: 7.16 secs
GL anti-alias x2: 6.36 secs
Yep…
Found a Quadro K2000 2Gb on sale here, ordered and will get it tomorrow… Not that I don’t trust you Jeff to find a solution, but it was a good deal (a lot less than the 780) so I jumped. I can always use the two it seems according to Michael VS…
–Mitch
Ok…update.
I have “sort of” good news… Digging into this I almost immediately found differences between solids and “open” surfaces… kind of by accident… I’m writing a sample application that I can make quick changes to for testing… and realized that my app was drawing solids (i.e. a sphere) faster than Rhino was. So, poking around, I think I’ve found a serious difference between Quadro and GeForce drivers in the way they deal with culled (or unculled) faces… In other words, double-sided polygons seem to really suck on GeForce cards/drivers.
Anyways, long story short… a small tweak and I’m already seeing a 2.5x increase in performance… However, I’m not sure what a true, robust solution is at the moment.
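For anyone who wants to see the culled-versus-double-sided cost outside of Rhino, here is a rough stand-alone sketch (Python, with PyOpenGL and GLUT assumed to be installed). It is not the plugin’s actual code; it just draws a grid of dense spheres with back-face culling either enabled (the single-sided path a closed solid can use) or disabled with two-sided lighting (the “double-sided” path), so frame times can be compared on your own card. The CULL_BACKFACES flag and the sphere counts are purely illustrative.

```python
# Minimal culling test: compare single-sided (culled) vs. double-sided drawing.
import time
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *

CULL_BACKFACES = True   # flip to False to emulate the "double-sided" path


def init():
    glEnable(GL_DEPTH_TEST)
    glEnable(GL_LIGHTING)
    glEnable(GL_LIGHT0)
    if CULL_BACKFACES:
        # Closed meshes (spheres, boxes) never show their back faces,
        # so they can be culled and lit single-sided.
        glEnable(GL_CULL_FACE)
        glCullFace(GL_BACK)
        glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_FALSE)
    else:
        # "Open" surfaces need both sides: no culling plus two-sided lighting.
        glDisable(GL_CULL_FACE)
        glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE)


def display():
    start = time.perf_counter()
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    gluLookAt(0, 0, 160, 0, 0, 0, 0, 1, 0)
    # A small grid of dense spheres, loosely mimicking the 5x5x4 sphere test.
    for i in range(5):
        for j in range(5):
            for k in range(4):
                glPushMatrix()
                glTranslatef((i - 2) * 12.0, (j - 2) * 12.0, (k - 1.5) * 12.0)
                glutSolidSphere(5.0, 64, 64)
                glPopMatrix()
    glFinish()  # make sure the GPU has actually finished before timing
    print("frame time: %.3f s" % (time.perf_counter() - start))
    glutSwapBuffers()
    glutPostRedisplay()  # redraw continuously so timings stabilise


def reshape(w, h):
    glViewport(0, 0, w, h)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluPerspective(45.0, float(w) / max(h, 1), 1.0, 1000.0)


if __name__ == "__main__":
    glutInit()
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
    glutInitWindowSize(1280, 720)
    glutCreateWindow(b"cull test")
    init()
    glutDisplayFunc(display)
    glutReshapeFunc(reshape)
    glutMainLoop()
```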
I’ve attached a plugin that will apply this tweak during the rendering process, so I want to see/know if all of you are seeing the same increase…
1. Unzip and install the plugin
2. Load your file
3. Run TestMaxSpeed
4. Run GeForceTestEngine
5. Run TestMaxSpeed
Note: GeForceTestEngine is a toggle… each time you run it, it turns the tweak ON or OFF… pay attention to what is printed on the command line.
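If you would rather not type steps 3 to 5 by hand, they can be chained from Rhino’s Python editor with something like this rough sketch (rhinoscriptsyntax assumed; TestMaxSpeed still reports its own time on the command line after each pass):

```python
import rhinoscriptsyntax as rs

# Run the baseline, toggle the tweak on, run again, then toggle it back off.
rs.Command("_TestMaxSpeed")        # baseline, tweak OFF
rs.Command("_GeForceTestEngine")   # toggles the tweak ON (check the command line)
rs.Command("_TestMaxSpeed")        # same scene, tweak ON
rs.Command("_GeForceTestEngine")   # toggle back OFF when finished
```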
In the meantime, I will keep digging into possibly more differences…
-Jeff
GeForceTestEngine.zip (12.3 KB)
Mitch! … I guess I wasn’t fast enough for you
-J
Hi Mitch
To get both the GeForce and Quadro cards working together in one PC, I first installed the GeForce on its own and installed the Nvidia GeForce drivers. I then moved the GeForce to the 2nd PCI-E slot, fitted the Quadro, and installed the Quadro driver over the GeForce driver. The Nvidia Control Panel then gave me the additional Quadro-only options but still knew how to use the GeForce card. Hope that helps. Michael VS
On a GTX 780Ti, I am seeing 4.34 with the GeForce Test Engine disabled, and 1.70 with it enabled. This is for spheres. When I run a more mixed model (an assortment of closed and not-closed surfaces / polysurfaces), I see less of a difference: 13.98 with it off, 14.20 with it on (on was slower). With 10,000 cubes, I am not seeing much of a difference: 20.28 with it off, 20.16 with it on.
Ya, once you start getting up into 10,000 polysurfaces, you’re really starting to test/see the bottlenecks with Rhino’s poor object management, and not with the GPU. 10,000 cubes equates to 60,000 individual mesh draws… If you really want to see how well your GPU does with that, then extract the render meshes and join them into a single, disjoint mesh, hide the polysurfaces, and run TestMaxSpeed (run it a few times to make sure any caching that occurs kicks in)… that will show you how well your GPU can handle that many polygons… any other way will be interfered with by Rhino’s overhead.
That being said, the spheres example does seem to scale… 800 spheres results in over a 2x increase… even 3200 spheres sees over a 2x (almost 3x) increase…
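For anyone who wants to try that single-mesh comparison, a rough rhinoscriptsyntax sketch of the workflow might look like the following (it assumes the heavy objects are polysurfaces and that their render meshes already exist, e.g. after the view has been shaded once; adjust the object filter to suit your model):

```python
import rhinoscriptsyntax as rs

# Collapse all polysurface render meshes into one disjoint mesh, hide the
# originals, then time raw mesh drawing with TestMaxSpeed.
breps = rs.ObjectsByType(rs.filter.polysurface)
if breps:
    rs.UnselectAllObjects()
    rs.SelectObjects(breps)
    rs.Command("_ExtractRenderMesh", echo=False)   # copies out the existing render meshes
    meshes = rs.LastCreatedObjects()
    rs.UnselectAllObjects()
    rs.SelectObjects(meshes)
    rs.Command("_Join", echo=False)                # one big disjoint mesh
    rs.HideObjects(breps)                          # take Rhino's per-object overhead out of the test
    rs.Command("_TestMaxSpeed")                    # run a few times so any caching kicks in
```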