Can graphics cards improve the experience in Rhino or other 3d modelling programs?
I’m sure the answer to my question depends in part on which 3d modelling program one is using.
I primarily use Rhino and Revit, and intend to start using Maya in the near future. I’m looking to buy a new laptop. I found one that can be configured with dual Nvidia GeForce GT 750M GDDR5 2GB graphics cards, but before I pull the trigger, I wanted to get some feedback from the Rhino crowd.
I read elsewhere that Nvidia’s GeForce cards really don’t help programs like Rhino that much, if at all. I was hoping someone here could give as clear an answer as possible.
I know my question may be vague or immense. Feel free to ask further questions if needed.
They can make a big difference, as they affect what you see on screen and how fast that view updates, along with things like real-time shadow quality in rendered mode and anti-aliasing quality of lines and edges. If you work with very simple models, a better card may not feel much different from a cheaper one, but the more you are displaying on your screen at once, the better the card you’ll want. It can be the difference between a slick, smooth experience and a jerky, uncomfortable one (visually speaking).
As Heath mentioned, GPUs make a difference for sure. The topic is immense though so I’m including our wiki page on this subject that branches into other subsets to consider as well such as dual GPU systems. http://wiki.mcneel.com/rhino/rhino5videocards
Opinions can vary between users with the same card, too. If you look at reviews for any GPU out there, you’ll see this. Any feature or issue is often a combination of the software and OS being run and the drivers for the GPU itself. For instance, one of the most common fixes I suggest for reported display issues in Rhino is “try updating the GPU drivers from the manufacturer’s site.” New laptops ship with old drivers most of the time, and this can mean a fancy new display feature in Rhino 5 doesn’t work well. In short, though, I have been pleased with the Nvidia GeForce cards in the laptops I’ve used. Compared to an Intel HD chipset, they are actually doing quite a bit.
How practical (possible?) is it to put some of the calculating on the GPUs in a modeling application? For instance, I’ll sometimes wait quite a while for certain operations to complete (booleans), and during that time the GPUs basically sit idle… so while I’m sitting there waiting, the OpenCL-type thoughts start creeping in. From a technical/programming standpoint, though, I have no idea what’s possible.
Don’t expect too much. My experience is that if the display is slow because the model is complex, the old GTX 285 is still the best; modern cards are not faster, and most are slower. Look around the forum — there have been some discussions and test results. The problem is that Nvidia pushed down the OpenGL performance (which Rhino needs) on newer cards, so old cards are often faster than new ones, and McNeel hasn’t yet found a solution in the form of a different display pipeline that can use the power of modern GeForce GPUs.
Last I heard, the devs were going to look into what might be possible in this area. I haven’t heard of anything yet but fingers and toes are crossed. My understanding is that some types of calculations lend themselves to being enhanced by stream processor acceleration while other commands/calculations would not. It’s on the radar, but nothing to get excited about yet.
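To illustrate the distinction above — why some calculations lend themselves to stream-processor acceleration and others don’t — here is a toy sketch in plain Python (hypothetical data, no real GPU code). The first function transforms each vertex independently, the kind of work that maps cleanly onto thousands of GPU cores via something like OpenCL; the second has each step depend on the previous result, which is closer in spirit to the dependency chains inside boolean/intersection calculations and resists that kind of parallelism.

```python
def scale_vertices(vertices, factor):
    """Data-parallel: each vertex is transformed independently of the
    others, so the loop body could run on thousands of GPU cores at once."""
    return [(x * factor, y * factor, z * factor) for (x, y, z) in vertices]

def running_offset(offsets):
    """Sequential: step N needs the result of step N-1, so the work cannot
    simply be spread across cores -- roughly why boolean-style operations
    are harder to accelerate than per-vertex transforms."""
    total, result = 0.0, []
    for o in offsets:
        total += o          # depends on the previous iteration's total
        result.append(total)
    return result

print(scale_vertices([(1.0, 2.0, 3.0)], 2.0))  # [(2.0, 4.0, 6.0)]
print(running_offset([1.0, 2.0, 3.0]))         # [1.0, 3.0, 6.0]
```

This is only a conceptual sketch, not how Rhino (or any CAD kernel) is actually implemented, but it captures the rule of thumb: independent per-element work is a good GPU candidate, while chained dependencies usually stay on the CPU.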