Are there settings I need to enable to get the full benefit of this new RTX 2080 Ti card? I expected it to be fast, but it’s very slow when I select GPU in the V-Ray Next window. Also, when monitoring the Task Manager Performance tab, the GPU only goes up to about 5%… it seems like it’s not using the card’s full potential. What do I do to fix this so the GPU is used fully?
Running it in an AMD Threadripper 2920X machine.
Slow compared to what?
Task Manager doesn’t show CUDA utilization unless you select it specifically, or use a third-party GPU monitor.
Slow compared to the CPU rendering. This is an RTX 2080 Ti card. It should not be 10x slower than the CPU, should it?
And what settings must I have enabled that I’m probably missing?
Is it possible that my NVIDIA Control Panel settings are wrong? I see a “Manage 3D Settings” page with tons of options… is this panel where I configure the card? I have no idea…
It would be a CUDA app; none of the Control Panel settings have anything to do with it, and there are no settings to set. The CUDA cores are there, they get used, that’s all there is to it.
Is it ACTUALLY 10x slower, or is that just the number you get on their benchmark? The way the CPU and GPU renderers work is totally different; GPU raytracing is much more brute-force intensive. I wouldn’t be surprised if you were a bit disappointed by the performance of one 2080 Ti vs a monster Threadripper in a not-apples-to-apples comparison.
It’s much slower. I don’t know if it’s 10x, but at least 3-4x… also, why wouldn’t the Task Manager GPU graph show the GPU being used at all? Strange? I’m disappointed because it doesn’t feel much faster than my 4-year-old mid-range laptop. Something is not right with my settings.
Lastly, when I render on the GPU, it doesn’t render some of the lights in the scene, whereas the CPU does.
Because Task Manager is kind of dumb. You CAN set it to show what it’s doing: switch one of the graphs to “Compute” or “CUDA.” All the default graphs are about video processing and OpenGL/DirectX.
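If you want a second opinion outside Task Manager, here’s a minimal sketch that polls whole-GPU load through NVIDIA’s NVML Python bindings (pynvml, installed with `pip install pynvml`); it assumes GPU index 0 is the card in question:

```python
# Minimal sketch: poll whole-GPU utilization via NVIDIA's NVML bindings.
# A CUDA renderer shows up here even when Task Manager's default
# 3D/Video graphs sit near 0%.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes GPU 0 is the 2080 Ti
print("Monitoring:", pynvml.nvmlDeviceGetName(handle))

try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {util.gpu:3d}%  VRAM {mem.used // 2**20}/{mem.total // 2**20} MiB")
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```

Run that while a render is going; if the GPU column climbs during a V-Ray GPU render, the card is being used regardless of what the default Task Manager graphs say.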
The GPU and CPU renderers are different: they support different features, and you need to optimize your scenes differently depending on which one you’re going to render with. I use Iray with four 1080 Tis, and the point is NOT that I couldn’t possibly get a similar result about as fast on a CPU renderer like Brazil if I spent days optimizing settings; the point is very realistic rendering without fussing with optimizations.
Thanks Jim. I didn’t know that.
Seems to me like a big waste of money buying the 2080 Ti if the speed is so slow. If I don’t figure out what I’m doing wrong, I’ll downgrade. What’s the point of the card if it’s not rendering efficiently? I really don’t get it.
And as for different settings in V-Ray when using GPU rendering, I guess I have to trial-and-error it? An entire emissive light panel is dark in GPU mode. So strange. I have to play around with it.
But I still think my settings must be wrong. This card can’t be this slow. No way.
I use V-Ray GPU rendering a lot now and I like the speed compared to my dual Xeon (32 × 3.2 GHz). My impression is that one 1080 Ti is about as fast as my old dual Xeon.
I was quite disappointed that the new V-Ray, based on the “new approach” without subdivs, was quite slow. In the past, a well-adjusted LC+IM setup let me render an interior scene in high resolution in 20 minutes. Jumping from LC+IM to LC+BF costs some render time, so one 2080 Ti with BF+LC lets me render my interiors in ~20 minutes again. Same old speed, but the detail quality is very nice: no splotches anymore, and fine details are kept. Finally I bought two 2080 Tis and now get my complex train interiors in approx. 12 minutes. If I render with 2× 2080 Ti plus 1× 1080 Ti, I get approx. 10 minutes. But mostly I use the 1080 Ti for system/Rhino/display and the two 2080 Tis for rendering only. That gives me the best stability: if something crashes, it’s not the display.
Also, I found GPU mode is faster if I disable progressive mode (min/max subdivs of 1/100 works quite universally).
Hybrid mode doesn’t help much: my dual Xeon saves maybe 10% of the render time at most, but it’s not worth it for me. I get CPU heat problems because the GPUs heat up too much.
Why do you guys bother with GPU rendering in V-Ray? V-Ray GPU rendering sucks. It’s not stable. It’s not accurate. It doesn’t work the same way as the CPU renderer.
There are better GPU rendering solutions on the market.
Chaos Group presents their GPU rendering videos running on many GPUs in SLI mode. That is why we all get excited about GPU potential in the YouTube videos. But most consumers won’t have the possibility to buy/spend $20,000 on video cards.
It’s better to have a powerful CPU and render on that (Threadripper…).
Or give other software a try, like Octane / Thea Render / Redshift / Clarisse / Arnold.
Octane and Thea Render have full integration with Rhino. They’re great. Thea is really simple to learn and works like a charm with NVIDIA cards.
I have been using V-Ray at my architecture office, and I have to say that the more time I spend in V-Ray, the more I realize it takes too much time to get a great image. But this is just my opinion.
KeyShot 9: Support for NVIDIA RTX Ray Tracing and AI Denoising Coming Soon
For me V-Ray GPU works very well. Very stable, nearly no crashes, detailed and accurate renderings, and it also works for complex scenes. I use it every day for projects and love it.
Sorry, but here version 3.60.03 for Rhino 5 SR14 isn’t stable at all. Too many lags. Many crashes. The GPU won’t render glass over textures. Glitches and small artifacts.
We migrated to the latest one, but we work with files located on a server, and our textures went missing. I had to downgrade.
I use the latest VfR Next and it’s great. I skipped VfR 3.x since it was not for me: too young, too many problems everywhere. But VfR Next is at a new level; every update over the last weeks has been a big step forward.
If you have problems rendering your scenes from 3.6, then talk to support, and I’m sure they will look for a way to get them running.
Hi Architex,
I don’t quite understand. You mean textures are visible in Rhino but not rendered in V-Ray? V-Ray comes with a File Path Editor to quickly remap missing files if that is the case.
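If the File Path Editor doesn’t cover it, scripting a batch remap is also possible. This is only a rough sketch using plain rhinoscriptsyntax with a hypothetical folder path, and it touches standard Rhino material textures only, not V-Ray’s own assets:

```python
# Rough sketch: repoint missing bitmap textures on plain Rhino materials
# to a new folder. Run inside Rhino's Python editor. NOTE: this only
# touches standard Rhino material textures; V-Ray's own assets still
# need the File Path Editor.
import os
import rhinoscriptsyntax as rs

NEW_FOLDER = r"D:\textures"  # hypothetical path; point at your library

for obj in rs.AllObjects():
    index = rs.ObjectMaterialIndex(obj)
    if index == -1:
        continue  # object has no material assigned
    path = rs.MaterialTexture(index)  # current bitmap filename, if any
    if path and not os.path.exists(path):
        candidate = os.path.join(NEW_FOLDER, os.path.basename(path))
        if os.path.exists(candidate):
            rs.MaterialTexture(index, candidate)  # remap to the new location
```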
Sorry for the delay.
What I was trying to explain is that here at our architecture office we have tested the new V-Ray version for Rhino. What we found is that files saved in the latest V-Ray for Rhino won’t load any textures when opened in a previous V-Ray version. Textures are missing even in the path editor.
In summary:
Files saved from the latest V-Ray for Rhino won’t load any textures when opened in V-Ray v3.60.03. Textures are missing all over the files. We contacted Chaos support and they sent us to “hell”. Their stupid answer was that we need to migrate to the new version or stay on our current one. No solution for it.
We argued that we are an architecture office with many complex files, some of them linked to each other, with external references from blocks and, above all, our big texture and object library.
How can we migrate between V-Ray versions if we can’t load textures? This is stupid.
Hello,
V-Ray for Rhino is not forward-compatible, meaning projects done with V-Ray Next will not be read correctly by V-Ray 3.6 or older versions. If a project is opened using an older V-Ray version than the one it was created with, a prompt will offer to wipe all V-Ray data to ensure file stability.
If you wish to keep a specific project file compatible with v3.60.03, you can keep a copy of it as it is and then migrate the original to v4.10.01. In case you want to reuse parts of old projects without modifying them, simply copy what’s needed to a new project and then do not save the source files when the job is done.
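For example, making that safety copy before migrating can be as simple as this (hypothetical filenames):

```python
import shutil

# Keep an untouched 3.6 copy, then open and migrate the original in Next.
shutil.copy2("project.3dm", "project_v3.60.03_backup.3dm")
```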
Be advised that V-Ray Next for Rhino licenses also work with V-Ray 3.6, allowing anyone who purchases an upgrade to complete unfinished projects using V-Ray 3.6 if needed.
For more info, visit https://docs.chaosgroup.com/display/VNFR/Migrating+from+Previous+Versions
If you have any questions related to V-Ray or have stumbled upon any technical difficulties while using it, please let me know!
Kind regards,
Peter
I’m trying the Octane plugin for Rhino and it seems promising, but it’s buggy. Have you had any luck with Octane, and is it possible to round-trip from Rhino to Octane standalone and back?
Thanks
How did all this work out for you? Did you run the Octane benchmark program?
I thought I wasn’t happy with Octane, but I am extremely happy now that I squared away the “issues”!
My 1070 card was overclocked, and that was the problem. I turned on Debug Mode in the NVIDIA Control Panel (which drops the card back to reference clocks) and voilà…
To view GPU usage you can use the following:
RivaTuner, Process Explorer, Process Hacker 2, Rainmeter, or MiTeC Task Manager DeLuxe.