Vray 3.6 - GPU + CPU

Please can someone explain the new GPU & CPU modes… I know it sounds obvious, but the results can be different.

Surely you will always want to use both the GPU & CPU to get the render done faster?

Thanks
Rich

Simply put, if you use CPU + GPU (hybrid rendering), you send CUDA/OpenCL instructions to the CPU too.

It's like using V-Ray RT, but the CPU also takes part in the calculation.

The difference in render results gets smaller with each new version, but some features are not yet supported on the GPU.
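
In case it helps to picture that: here's a tiny pyopencl sketch (nothing V-Ray-specific, and it assumes pyopencl plus an OpenCL runtime with a CPU driver are installed) that lists every device the OpenCL stack exposes. On a typical workstation the CPU shows up as a compute device right next to the GPU, which is the idea behind hybrid rendering:

```python
import pyopencl as cl  # pip install pyopencl

# List every OpenCL device on the machine. With a CPU runtime
# installed (e.g. Intel's), the CPU appears as a compute device
# just like a GPU does.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        kind = cl.device_type.to_string(device.type)
        print(f"{platform.name}: {device.name} ({kind})")
```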

Has anyone checked the GPU usage?

I get very low usage in GPU-only mode (~50%) and in GPU+CPU mode (5% GPU / 70% CPU).

(GTX1080 + DualXeon)
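
If you want to log it rather than eyeball it, a quick sketch like this polls nvidia-smi once a second (it assumes the NVIDIA driver is installed; running `nvidia-smi -l 1` in a terminal gives you the same numbers):

```python
import subprocess
import time

# Print per-GPU core and memory-controller utilization once a second.
while True:
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,utilization.memory",
         "--format=csv,noheader,nounits"],
        text=True)
    for line in out.strip().splitlines():
        idx, core, mem = (s.strip() for s in line.split(","))
        print(f"GPU {idx}: {core}% core, {mem}% memory")
    time.sleep(1.0)
```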

Well, if it's like other GPU renderers, you don't actually want to use the CPU, since the extra overhead needed to coordinate the CPU and GPU will (I guess depending on your GPU(s)) swamp whatever contribution the CPU makes. That's been my experience. But there may be certain features that just don't work on the GPU yet, so it has to fall back.

OK, still confused… this is all about maths and speed… we just want the render done as soon as possible… so surely the GPU and CPU together will get it sorted more quickly… or am I being a bit stupid?

I don't know how V-Ray does it, but with Raytraced, for instance, the speed is (currently) determined by the slowest device in a combo. So even if you had two GPUs, say a GeForce GT 420 and a GeForce GTX 1080, the speed you'd see is close to twice that of the GT 420 alone - the GTX 1080 would be sitting idle for quite a bit. I imagine the same is happening here. When you have multiple devices with clear speed differences, you'd probably be better off using only the fastest, especially if the faster device is MUCH faster than the slower one (at least twice as fast).
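
A toy calculation makes the point (the 1:10 speed ratio is made up, just to stand in for a GT 420 vs a GTX 1080):

```python
# Toy model: two devices, one ten times faster than the other.
total_work = 1000.0
speed_slow, speed_fast = 1.0, 10.0

# Static 50/50 split: each device gets half the work up front,
# so you wait for the slow one - about 2x the slow device's rate.
t_static = max(total_work / 2 / speed_slow, total_work / 2 / speed_fast)

# Ideal dynamic split: work flows to whichever device is free.
t_ideal = total_work / (speed_slow + speed_fast)

print(f"slow device alone:  {total_work / speed_slow:.0f} time units")  # 1000
print(f"static 50/50 split: {t_static:.0f} time units")                 #  500
print(f"ideal combined:     {t_ideal:.0f} time units")                  #   91
```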

Hmm, that's weird. I use Thea and there's no chance of idle or 50% usage; there's always full power draw, and the setup doesn't matter. You can use, e.g., Kepler and Pascal cards, and both will be used at 100% compute power at the same time.

If an engine has work-stealing implemented, it can indeed work that way. Raytraced probably will in the future, but not at the moment. Note that I have no idea how other engines do this; I was merely guessing what could be the case.
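
For the curious, here's a minimal sketch of the idea (plain Python threads standing in for devices; the per-tile times are invented): both workers pull tiles from one shared queue, so the fast one simply ends up doing most of them and neither sits idle:

```python
import queue
import threading
import time

tiles = queue.Queue()
for i in range(200):          # 200 tiles of work
    tiles.put(i)

done = {"slow": 0, "fast": 0}

def device(name, seconds_per_tile):
    # Keep grabbing tiles until the shared queue runs dry.
    while True:
        try:
            tiles.get_nowait()
        except queue.Empty:
            return
        time.sleep(seconds_per_tile)  # stand-in for rendering a tile
        done[name] += 1

workers = [threading.Thread(target=device, args=("slow", 0.010)),
           threading.Thread(target=device, args=("fast", 0.001))]
for w in workers: w.start()
for w in workers: w.join()
print(done)  # roughly {'slow': 18, 'fast': 182} - split by speed
```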

Yes, sure. I'm just a bit surprised that other engines have problems with hybrid rendering while Thea has had this for years. Anyway, I can't wait to get my hands on v6; then I will test Cycles with pleasure :slight_smile:

V-Ray already has a very robust distributed rendering system; it can use hundreds of nodes to contribute to a render without much overhead.
In GPU+CPU mode, it uses the GPU render engine and has the CPU execute CUDA code so that it acts as another nVidia GPU.
I did a simple test:
GPU only: 1m43.3s
GPU+CPU: 1m5.3s
about 37% less render time (roughly a 1.6× speedup), if I have the math right.
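
The arithmetic, spelled out (same numbers as above):

```python
# Spelling out the speedup arithmetic from the timings above.
gpu_only = 1 * 60 + 43.3   # 1m43.3s -> 103.3 s
gpu_cpu  = 1 * 60 + 5.3    # 1m5.3s  ->  65.3 s

time_saved = 1 - gpu_cpu / gpu_only   # ~0.37 -> ~37% less render time
speedup    = gpu_only / gpu_cpu       # ~1.58x throughput

print(f"{time_saved:.0%} less time, {speedup:.2f}x speedup")
```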

So your GPU is running hot - even if you don't use a GPU monitor, you can hear the fans running?

I used the nVidia tool “NvGpuUtilization”, and it was at 100% most of the time.

Sounds good. Thank you.

Here is a test I made in V-Ray 3.4 for SketchUp. I think it should be similar in Rhino.

[image: V-Ray 3.4 CPU vs GPU test results]

I wonder if it’s still valid in v3.6.

I will make a test with pictures tomorrow. But in my opinion 3.6 is faster, and there is no difference in brightness between the different settings. The pictures from CPU and GPU look the same.

And here are images with time stamps.

Cool model - is it available to download or buy?

roju, what CPU and GFX do you use?

Thanks a bunch
Jonas

Here I tried to maintain the same noise level/image quality; the GPU+CPU result could be a bit less noisy and the denoiser a bit less aggressive, but it shows the value of these settings:
