Firepro W7100 vs Quadro M4000

Does anyone have experience of using these cards with Rhino?

I’m veering towards the Firepro because it’s a little cheaper for a pro card. Also, and please don’t shoot me down for this, I’d like to game on my system from time to time.
All my recent research seems to point to the Quadro being the better card but not necessarily by miles and gaming performance is not supposed to be too hot on the Quadro.
I have read about poor anti-aliasing performance on the FirePros, though mostly on older forums where the W7100 isn't mentioned. Have they sorted out these issues with the more recent cards?

The alternative may be to use two cards. One pro workstation, one mid range gaming. Plenty of people seem to have done it though I wouldn’t have the first idea how to make that work.

Any thoughts?

What kind of projects do you do?
I use a Quadro 4000 on one machine and a GTX 970 on another. I also have a FirePro V7900 that collects dust, because Nvidia wins hands down over AMD when it comes to curve anti-aliasing. I hated doing design work on the V7900; it made the viewports look like they ran at an upscaled resolution.
(The 970 is a great gaming card too.)

Most of my work is prototyping for propmaking. Some output to 3d print, some concept work, where rendering becomes more of an issue and some technical assemblies. I need a balanced system that will also stand me in good stead with Solidworks and Inventor. What I’m doing at the moment probably wouldn’t tax these cards but I need the scope to handle things as they get more complex. Maybe I’m aiming too high, and too expensive.
I can’t see myself getting two machines though so I’d like to research twin cards if possible.
Does anyone know if the W series Sapphire cards have upped their game compared to the V series?

I ran the Quadro 4000 alongside a GTX 260 for a while, but the system wasn't stable (both cards alone were more stable than together), so personally I cannot recommend it. But the idea is good and I would love for it to work. Maybe things are better now, though.

You need to check whether SW and the others benefit a lot from Quadro or FirePro; if not, I would go for a GeForce.
If they do, then check out the FirePro before buying; it might be a great option for you, and if the laptop has a high-resolution screen you can consider turning off AA.

Good luck.

Sorry, just checking. “Laptop”?? I’m building a tower.

Good advice on stability issues though, assuming you are referring to a tower build.
Both of the other programs have both cards on their recommended lists, so it's still difficult to know which way to go. I do know, though, that the Quadros are supposed to be junk as far as gaming is concerned. Maybe I should just get a PS4 and have done with my two-tone aspirations.

My bad, I didn’t know Nvidia had released a new model; they have a 4000M chip for laptops. Since the M4000 uses the same Maxwell chip as the 970, I am sure it is OK for gaming too. It will be a bit slower, 10–20% I presume, since Quadros are clocked lower for increased stability at 100% load over long periods of time.

I presume you have to test for yourself: buy the 970 if cost is key, and the M4000 if speed in SW and stability are of the essence. (And buy from a place with a return policy so you can change your mind if needed.)
The M4000 seems to outperform even the K5200! But I could not find any good reviews of the card, nor comparisons.

You may also want to consider whether you might end up using a renderer (like Octane) that (currently) relies on CUDA technology.

I have a W7100 and it works just fine for CAD and gaming. The only reason you should lean toward the much more expensive M4000 is if you need CUDA rendering. If you already have a rendering program that uses CUDA, then go for it. The W7100 uses the OpenCL standard for rendering.

Something else to consider, if you want to add another card later down the road: Nvidia requires the exact same card for multi-GPU configurations, while with AMD it doesn't matter.

That sounds good, though I'm still trying to drag myself out of the primordial soup as far as my knowledge of the technology is concerned. I'm not really sure what CUDA cores are, or how their presence or number affects things.
I'll be using V-Ray and whatever native rendering Solidworks and Inventor possess.
V-Ray looks like it can render on CPU, OpenCL, and CUDA, so I don't necessarily 'need' CUDA. As it's available though, am I right in thinking CUDA is best here?
Also, what about anti-aliasing?

Disclaimer: I work at Nvidia, so can’t claim to be objective in this discussion, but wanted to explain a few things.

CUDA is Nvidia’s technology for programming GPUs. Depending on your task, the programmable CUDA cores on our processors can be used for real-time OpenGL or DirectX display of 3D and 2D graphics on monitors - drawing pixels, to put it simply. This is the basic functionality of all graphics cards, and can even be handled, at a very basic level, by the integrated graphics on some CPUs. The differences become apparent at high resolutions, with complicated data sets, and in the subtle details, such as texture display, anti-aliasing, and the rendering of shadows and ambient occlusion.

However, CUDA is also used to reprogram the cores to perform general computational processing tasks, like ray tracing with GPU-accelerated renderers, such as Octane, or NVIDIA Iray. It can also be used for other computation, such as accelerating many parts of the Adobe video production pipeline (e.g., Mercury Engine). CUDA runs on ALL recent Nvidia GPUs…

OpenCL is an open-standard API for programming GPUs, used by developers supporting AMD or the Mac OS. It also works fine on Nvidia GPUs. I won’t compare, except to note that Otoy (Octane), the Chaos Group (V-Ray), Redshift, Adobe, etc., actively develop in CUDA, but not OpenCL. They have many good reasons.

I suggest you check out Iray for Rhino as you contemplate your choices. As a 3D visual effects artist and designer who has worked for two decades with nearly every renderer you can name, I can say with confidence that GPU-accelerated ray tracing will forever change the way you think about and use rendering. You won’t regret buying the best, highest-end GPU you can afford. More CUDA cores and more memory mean more time creating and less time waiting. If it were me, my choice of renderer would very strongly influence my choice of graphics card.

There is more to the decision than graphics. Much of the work that goes into Quadro is focused on improving the user experience through driver improvements and by working with software developers, such as McNeel, to optimize and certify performance and functionality. It also includes development of useful features, such as highly customizable multi-display configurations.

Finally, as for gaming, for a given GPU processor, Quadro is clocked somewhat more conservatively, to improve stability, reliability, and longevity, as these are core values for most professional users. If you really want an overclocked card for raw gaming speed, by all means buy one. It will probably work fine, most of the time, for Rhino, as well. If it’s an Nvidia GPU, it will work with CUDA, too. However, only Quadro will give you the confidence of knowing the card has been certified by software developers to reliably run their professional design applications, year after year, and that it has the support of a team deeply committed to improving the experience of professional designers.


Very interesting topic - I’m in a similar position (W9100 vs. Quadro K5000/K6000) and have not yet made the final decision. Extensive tests and benchmarks (testberichte-241531-17.html) suggest that the W9100 beats the K5000 and comes close to the K6000 for a far better price. However, for me the performance with Rhino and the SiTex renderer for 3D, and Corel for 2D, is the key issue. The criticism of Nvidia is its proprietary CUDA API vs. the open OpenGL/OpenCL architecture. I’m not a hardware specialist; I can only parrot what I read.

At the moment, I’m using a six-year-old machine with a FirePro V8800 and a 6-core Xeon @ 3.3 GHz. It’s still fast, but the BIOS is quite outdated and cannot talk to 4 TB drives, and USB is still 2.0. The new machine will have a 14-core Xeon, but I'm still not sure which graphics card to take. Is there a recommendation from McNeel?


One thing that seems worth considering, if we’re going with the Quadro cards, is the M series.
Quite a few reviews and benchmarks put the new generation of Maxwell cards a fair way ahead of the older Kepler K series, with the M4000 outperforming the K5000.
Seeing as I’m 75% about having a graphics workstation and 25% a gamer, I may end up with a Quadro. As long as using a Quadro doesn’t make all games look like Manic Miner.

On a side issue: when using Rhino day to day for drawing, pre-rendering, with complicated models, are CPU power and RAM the most important factors?

Care to elaborate on that sweeping statement? Convince me that I should spend an additional $400 for essentially a similar product.


Quadro, GeForce, and other brands of GPUs all have advocates, and all have their reasons. CUDA requires Nvidia GPUs (and also works on name-your-brand CPUs). For some, OpenCL is a good solution to reach additional computing customers. OpenCL runs well on Nvidia GPUs as well, and it turns out V-Ray has added reliable OpenCL support since I last visited its forums last year, contrary to my earlier statement. My apologies for the error.

Here is a backgrounder on CUDA:

CUDA is a computing platform that offers a deep and wide development environment, hardware, and support for high-performance computing. It has extensive support for a variety of industry-standard programming languages and coding tools. Nvidia has built a thriving brand (Tesla, not to be confused with the electric car company) on the merits of this platform and the incredible, reliable performance CUDA delivers. There are good reasons why Tesla (the car company) relies on Nvidia GPUs as the computers driving its self-driving cars, and extremely good reasons why Oak Ridge National Laboratory uses both AMD and IBM CPUs - but only Nvidia GPUs with CUDA - in its new Summit and Sierra supercomputers. Obviously, it’s not blind brand loyalty.

GPU programming is relatively difficult compared to programming on CPUs, because problems have to be structured and solved as massively parallel operations. Realistic rendering is a perfect example of a problem that is well suited to this type of computing. Because of the complexity of these problems, Nvidia invests heavily in its computing platform, which includes not only hardware and software, but training, support, and developer conferences (such as the GPU Technology Conference). Note who attends GTC - industry leaders and computer scientists driving innovation in technology, business, and science - but very few game companies.
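To make the "massively parallel" point concrete, here is a minimal, purely illustrative Python sketch (the `shade` function and its formula are made up for this example, not taken from any real renderer): each pixel is computed by an independent function of its own coordinates, so the work can be farmed out to any number of workers - the same structure a GPU ray tracer exploits across thousands of CUDA cores.

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 8, 4  # a tiny "image"

def shade(pixel):
    # Hypothetical per-pixel computation: it depends only on the pixel's
    # own coordinates, never on its neighbours, so every call is
    # independent and can run in parallel (like one ray per pixel).
    x, y = pixel
    return (x * 31 + y * 17) % 256  # stand-in for a real colour value

pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]

# On a CPU we get a handful of parallel workers; a GPU applies the same
# independent-work structure across thousands of cores at once.
with ThreadPoolExecutor() as pool:
    image = list(pool.map(shade, pixels))
```

Because no pixel reads another pixel's result, the order of execution is irrelevant, which is precisely what makes rendering "embarrassingly parallel" and such a good fit for GPUs.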

Nvidia also does continuous driver and feature development and refinement for its professional products, such as Tesla and Quadro, to ensure performance, stability, and usability that increase over the typical several-year life of a product.

For a 3D artist, this support boils down to a developer’s ability to quickly deploy new features to its tools, including 3D rendering software, and for the user to extract remarkable performance, day after day, from whatever NVIDIA GPUs he has in his system, or on his network.

When you asked me to elaborate about the reasons to spend $400 more on a GPU, you were asking me to justify the additional cost associated with developing and maintaining the Nvidia ecosystem, and how that platform will benefit you for the next several years as you use the card. Beauty is in the eye of the beholder. So is value. Whether you see $400 (or $300 million) of additional value in Nvidia GPUs will depend on the potential you see in that ecosystem.

Here are some GPU-accelerated renderers that use CUDA, many with Rhino plugins. (About a third also support OpenCL). Whatever the GPU, I suggest you give them a try:


In case you still haven’t purchased…
I had both the K5000 and M5000 for a week and tested both. The M5000 was equal to the K5000 in every way except in Rendered view where it was twice as fast. I didn’t end up keeping it since I mostly work in Shaded view (where there was no improvement), but it seems like a workstation card that can handle gaming at QHD. I made a graph comparing Holomark scores for the 980Ti, K5000, and M5000 in Rhino. Check my post history (it’s the only other post).
I also tested the V7900 from AMD and the viewport quality difference (line quality) is quite real and makes drawing harder.