Is something wrong with my settings? RTX 2080 Ti slow in Rhino 6 with V-Ray Next GPU rendering?

Are there settings I need to enable to get the full benefit of this new RTX 2080 Ti card? I expected it to be fast, but it's very slow when I select GPU in the V-Ray Next window. Also, when monitoring the Task Manager Performance tab, the GPU only goes up to about 5%… it seems like the card's full potential isn't being used. What do I do to fix this so the GPU is fully utilized?
Running it in an AMD Threadripper 2920X machine.

Slow compared to what?

Task Manager doesn't show CUDA utilization unless you select it specifically, or use a third-party GPU monitor.

Slow compared to the CPU rendering. This is an RTX 2080 Ti card. It should not be 10x slower than the CPU, should it?
And what settings am I probably missing that I need to enable?

Is it possible that my NVIDIA Control Panel settings are wrong? I see a "Manage 3D settings" page with tons of options… is this the panel where I configure the card? I have no idea…

It would be a CUDA app; none of the Control Panel settings have anything to do with it, and there are no settings to set. The CUDA cores are there, they get used, that's all there is to it.

Is it ACTUALLY 10x slower, or is that just the number you get on their benchmark? The CPU and GPU renderers work totally differently; GPU raytracing is much more brute-force intensive. I wouldn't be surprised if you were a bit disappointed by the performance of one 2080 Ti versus a monster Threadripper in a not-apples-to-apples comparison.

It's much slower. I don't know if it's 10x, but at least 3-4x… also, why wouldn't the Task Manager GPU graph show the GPU being used at all? Strange, right? I'm disappointed because it doesn't feel much faster than my four-year-old mid-range laptop. Something is not right with my settings.

Lastly, when I render on the GPU, it doesn't render some of the lights in the scene, whereas the CPU does. :frowning:

Because Task Manager is kind of dumb. You CAN set it to show what the card is doing, but you have to switch one of the graphs to "Compute" or "CUDA"; all the default graphs are about video processing and OpenGL/DirectX.
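If you'd rather not fiddle with Task Manager's graph dropdowns, you can poll the driver directly with nvidia-smi, which ships with the NVIDIA driver. Here's a minimal sketch in Python, assuming nvidia-smi is on your PATH:

```python
import subprocess
import time

# Poll GPU utilization once per second via nvidia-smi (installed with the
# NVIDIA driver). Unlike Task Manager's default 3D/Video graphs, this
# figure does include CUDA/compute work.
while True:
    out = subprocess.check_output(
        [
            "nvidia-smi",
            "--query-gpu=index,name,utilization.gpu,memory.used",
            "--format=csv,noheader",
        ],
        text=True,
    )
    print(out.strip())
    time.sleep(1)
```

Kick off a V-Ray GPU render and watch the utilization column; if it sits near 0% while rendering, the renderer really isn't using the card.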

The GPU and CPU renderers are different: they support different features, and you need to optimize your scenes differently depending on which one you're going to render with. I use Iray with four 1080 Tis, and the point is NOT that I couldn't get a similar result about as fast on a CPU renderer like Brazil if I spent days optimizing settings; the point is very realistic rendering without fussing about with optimizations.

Thanks Jim. I didn't know that.
It seems like a big waste of money buying the 2080 Ti if the speed is this slow. If I don't figure out what I'm doing wrong, I'll downgrade. What's the point of the card if it's not rendering efficiently? I really don't get it.
And as for different settings in V-Ray when using GPU rendering, I guess I have to trial-and-error it? An entire emissive light panel is dark in GPU mode. So strange. I have to play around with it.
But I still think my settings must be wrong. This card can't be this slow. No way.

I use V-Ray GPU rendering a lot now, and I like the speed compared to my dual Xeon (32 × 3.2 GHz). My impression is that one 1080 Ti is about as fast as my old dual Xeon.

I was quite disappointed that the new V-Ray, based on the "new approach" without subdivs, was quite slow. In the past, a well-adjusted LC+IM setup let me render an interior scene in high resolution in 20 minutes, and jumping from LC+IM to LC+BF costs some render time. One 2080 Ti with BF+LC lets me render my interiors in ~20 minutes again: the old speed, but the detail quality is very nice, with no splotches anymore and fine details kept. Finally I bought two 2080 Tis and now get my complex train interiors in approx. 12 minutes. If I render on 2× 2080 Ti plus 1× 1080 Ti, I get approx. 10 minutes, but mostly I use the 1080 Ti for system/Rhino/display and the two 2080 Tis for rendering only. That gives me the best stability: if something crashes, it's not the display.
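A quick sanity check on the multi-GPU scaling in those numbers (just arithmetic on the times above, assuming the 12 min and 10 min figures are the same train-interior scene):

```python
# Rough scaling arithmetic from the train-interior times quoted above
# (illustrative only; real scaling depends on the scene).
two_2080ti = 12.0    # minutes on 2x 2080 Ti
plus_1080ti = 10.0   # minutes on 2x 2080 Ti + 1x 1080 Ti

speedup = two_2080ti / plus_1080ti
print(f"adding the 1080 Ti: {speedup:.2f}x faster")  # ~1.2x for ~1/3 more hardware
```

So the extra card helps, but less than linearly, which is another reason I'd rather keep the 1080 Ti on display duty.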

Also, I found the GPU mode is faster if I disable the progressive mode (min/max subdivs of 1/100 works quite universally).

Hybrid mode doesn't help much; my dual Xeon saves at most about 10% of the render time, and it's not worth it for me - I get heat problems on the CPU since the GPUs put out too much heat.