GTX Titan X - Viewport Performance?

The new Titan, the “X”, was just released. I am running an MSI GTX 770 Lightning 2GB, and one of my projects slows way down when everything is shown. [4770K @ 4.0 GHz, 16GB CAS8 1866 RAM; moving to 32GB in a few days]

I am curious whether the 12GB(!!) of video RAM, along with the faster GPU, will help speed up viewport performance.

Any insight on video ram to viewport speed would be greatly appreciated.


I’ve used five or so GTX cards over the years with Rhino and currently have a Titan Black. In my experience, you won’t see a display-speed increase from the added video RAM. The GTX cards are still my pick for Rhino, however, as I see the fewest user-reported problems with them and they’ve always served me well for the model sizes I work on. Some users swear by Quadro cards for the best display speed in Rhino, but I prefer the CUDA-core horsepower and price point the GTX line offers. If you use a CUDA-enabled renderer, either inside Rhino or externally, these cards rock. The extra VRAM will simply let you load larger render meshes onto the card.
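For a sense of scale on that last point, here’s a rough back-of-envelope estimate of how much VRAM a render mesh might occupy. The per-vertex layout below (position, normal, UV as 32-bit floats, plus 32-bit triangle indices) is a typical assumption for illustration, not Rhino’s actual internal format:

```python
# Rough VRAM estimate for one render mesh's vertex and index buffers.
# Assumed layout (hypothetical, for illustration only):
#   per vertex: 3 floats position + 3 floats normal + 2 floats UV
#   per triangle: 3 x 32-bit indices

BYTES_PER_FLOAT = 4
FLOATS_PER_VERTEX = 3 + 3 + 2   # position, normal, UV
BYTES_PER_INDEX = 4
INDICES_PER_TRIANGLE = 3

def mesh_vram_bytes(vertices, triangles):
    """Estimate GPU memory used by one mesh's buffers, in bytes."""
    vertex_bytes = vertices * FLOATS_PER_VERTEX * BYTES_PER_FLOAT
    index_bytes = triangles * INDICES_PER_TRIANGLE * BYTES_PER_INDEX
    return vertex_bytes + index_bytes

# A 1-million-triangle mesh with roughly 500k vertices:
total = mesh_vram_bytes(500_000, 1_000_000)
print(f"{total / 1024**2:.1f} MiB")  # prints "26.7 MiB"
```

Under these assumptions, even a million-triangle mesh fits in a few tens of megabytes, which is why gigabytes of extra VRAM raise the ceiling on scene size but don’t make the viewport draw any faster.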

Thanks for the reply!

So what’s best for viewport performance with large model files?

I’m a graduate architecture student and I like detailing my whole building in Rhino for easy documentation.

Is it the processor? Processor speed? Number of cores? RAM speed? RAM quantity?


There are many factors involved in display speed. In general, the GPU and its driver are the most important. Also keep the polygon count low for the overall render mesh, or use per-object render meshes; the PolygonCount command shows how many polygons you’re drawing. Display settings like isocurves or mesh wires can also have a big effect on speed, so turn them off, either globally for the mode or per object, if they aren’t needed. Display features may be a factor too, so don’t enable settings you don’t need, such as shadows or the technical modes. Of course, the model itself is a variable, and there are geometry issues that can slow or speed up the display. Joining many separate meshes into one mesh is an example of a speed increaser, while bad objects (SelBadObjects finds them) have been known to slow the display in some cases.

If you have a specific model you’re trying to optimize for display speed, post the OpenGL details about your GPU, a screenshot of what you see now and the model itself.

Something I discovered after this conversation was Holomark 2 (there’s a thread about it on the forum).

For the upgrade to my studio desktop, I decided to go for the MSI 980Ti Gaming, replacing my MSI 770 Lightning.

770 Lightning:

980Ti (MSI Gaming), overclocked @ 1487 core / 7908 memory:

I am working on a Rhino model with 150,000 lines and 1,400 surfaces, and the model moves like butter.


Those mesh scores are impressive!
And it has a very nice boost to GPU_09 too!

And let’s hope McNeel finds a way to feed the NURBS data to the card faster in the future; if they manage to remove the bottleneck, you’ll see a similar boost to all scores (in theory).

Hi Brian,

I am looking at a new GPU setup, but was under the impression that Quadro cards give the best anti-aliasing (AA). Is this the case, or am I mistaken?

Considering a Quadro K2200 for general Rhino modelling and a GTX 980Ti for GPU rendering.

It would be good if GTX cards did just as well, so I could skip the Quadro.


The 980 should have AA just as good as the Quadro’s. Nvidia has always had good AA. AMD, on the other hand… :frowning:

Thanks for that, Jørgen; I guess I’ve been working off the wrong assumptions!

I’d like to render in the background and keep working, so I’m guessing a dual-card setup is needed. From what I understand, there are no massive performance gains across the Quadro range, so low- to mid-range Quadros are fine even for quite demanding viewport work. Is this generally the case with GeForce as well?

The only experience I have with a Quadro card is the FX3800 (maybe 3700) in my work computer, which drives two smaller displays (1280×1024-ish). My studio computer runs the 980Ti on a 4K display (AA @ 4x), and it crushes my work computer.

I had assumed a Quadro card would give me better performance, but I haven’t seen it in the Holomark numbers.

I have debated getting a second workstation card (an AMD W8100) for my studio computer, but at this point I’m not sure there would be an upside to doing it.

Interesting, thanks for that neobond.

I am not too sure the FX3800 (I think I’ve got the same one in an old workstation) is a fair comparison with the 980Ti, but it is an eye-opener to me that there doesn’t appear to be any advantage from the Quadros.

Still, I will do some more reading as there is lots to go through on the forum.