Rhino 6 Cycles doesn't use GPU? Stuck with CPU all the time

Hello All,

I just started testing Rhino 6 Cycles on my multi-GPU machine; below is what I’m trying to achieve.

  1. Select a specific GPU for Cycles processing.

  2. Turn on the Raytraced display mode.

  3. It utilizes the CPU instead, with the GPU idle at 0%.

Am I missing anything? Below is my system info.

Rhino 6 SR1 2018-2-6 (Rhino 6, 6.1.18037.13441, Git hash:master @ 5a33e6871b94d32ba552468218cef0ad8d3d1263)

Windows 10.0 SR0.0 or greater (Physical RAM: 32Gb)

GeForce GTX 1080/PCIe/SSE2 (OpenGL ver:4.6.0 NVIDIA 390.77)

OpenGL Settings
Safe mode: Off
Use accelerated hardware modes: On
Redraw scene when viewports are exposed: On

Anti-alias mode: 4x
Mip Map Filtering: Linear
Anisotropic Filtering Mode: Height

Vendor Name: NVIDIA Corporation
Render version: 4.6
Shading Language: 4.60 NVIDIA
Driver Date: 1-23-2018
Driver Version:
Maximum Texture size: 32768 x 32768
Z-Buffer depth: 24 bits
Maximum Viewport size: 32768 x 32768
Total Video Memory: 8 GB

Are you actually looking at the “compute” parameter in that performance monitor to get the CUDA usage?

The second picture of the performance monitor seems to show GPU0 at 9% usage. I assume that’s the one Raytracing is using. I don’t think it uses more than one, but Nathan would know.

I’m just using the Windows 10 Task Manager to see if my GPU is processing a task.
Also, GPU-Z is showing 0% GPU load and only 4% TDP power consumption.

I’m very new to Rhino and am considering switching, since Cycles is very promising. The primary GPU is always at 0%–20% load just rendering the viewport and driving the display (even when I’m rendering on CPU only), so I don’t think the 9% load on the GPU has anything to do with CUDA compute.

I came from other render engines (V-Ray RT, iray, Redshift) where I can specify a GPU for display and secondary GPUs for raytracing, while keeping CPU load low for UI and modeling tasks.


@nathanletwory This looks like something you would be interested in.

That graph doesn’t show CUDA; you have to click on one of the graphs on the GPU page and switch it to “Compute” to see that.


Go to Tools > Options… > Cycles and make sure your device is set as the render device.

Then you can click Raytraced (Cycles) in a Raytraced viewport; it will show which device is being used.

I just noticed that you already had your device selected. I missed that on my phone - sorry.

@tay.othman can you still verify that Raytraced thinks it is using your GTX 1080 by clicking on the product name in the viewport, as I outlined in my previous reply?

Remember also that rendering a scene as simple as yours won’t put your device under high load. You may want to try a more intricate scene with more metal and glass materials assigned to visible objects.

You can also select multiple CUDA devices to take advantage of more computing power. You should be able to time your 10,000-sample renders with one and with two CUDA devices and see if you get results twice as fast…
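To make that comparison concrete, the speedup is just the ratio of the two render times. A throwaway sketch with made-up example timings (the values for `t_one_gpu` and `t_two_gpus` are assumptions for illustration, not real measurements):

```python
# Hypothetical render times in seconds for a 10,000-sample render
# (example values only, not measured): one CUDA device vs. two.
t_one_gpu = 620.0
t_two_gpus = 340.0

speedup = t_one_gpu / t_two_gpus  # a perfect result would be 2.0
efficiency = speedup / 2          # fraction of ideal 2x scaling achieved
print(f"speedup: {speedup:.2f}x, scaling efficiency: {efficiency:.0%}")
```

In practice the second device rarely doubles throughput, since scene sync and tile scheduling add fixed overhead per device.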

edit: to further investigate, please paste the output of the command _RhinoCycles_ListDevices. Its output looks like:

We have 5 devices
	Device 0: CPU > Intel Core i7-7700 CPU @ 3.60GHz > 0 | False | True | CPU
	Device 1: OPENCL_NVIDIA CUDA_GeForce GTX 1060 6GB_04:00.0 > GeForce GTX 1060 6GB > 0 | True | True | OpenCL
	Device 2: OPENCL_AMD Accelerated Parallel Processing_Radeon (TM) Pro WX 9100_03:00.0 > Radeon Pro WX 9100 > 1 | True | True | OpenCL
	Device 3: OPENCL_Intel(R) OpenCL_Intel(R) HD Graphics 630_ID_2 > Intel HD Graphics 630 > 2 | True | True | OpenCL
	Device 4: CUDA_GeForce GTX 1060 6GB_0000:04:00 > GeForce GTX 1060 6GB > 0 | False | True | CUDA
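On machines with many devices it can help to filter that listing by backend. A rough sketch of a parser, assuming only the `Device N: id > name > index | flag | flag | type` layout shown in the sample above (this is inferred from that output, not a documented format, and the meaning of the two boolean columns is not stated):

```python
# Parse one line of _RhinoCycles_ListDevices output.
# Layout inferred from the pasted sample: "Device N: id > name > index | flag | flag | type".
def parse_device(line):
    head, rest = line.strip().split(": ", 1)
    dev_id, name, tail = [p.strip() for p in rest.split(" > ")]
    index, flag1, flag2, dev_type = [p.strip() for p in tail.split(" | ")]
    return {"id": dev_id, "name": name, "index": int(index), "type": dev_type}

sample = "Device 4: CUDA_GeForce GTX 1060 6GB_0000:04:00 > GeForce GTX 1060 6GB > 0 | False | True | CUDA"
print(parse_device(sample)["type"])  # CUDA
```

Filtering the full listing for `"type" == "CUDA"` would then show at a glance which entries are the CUDA (rather than OpenCL) views of your cards.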

Also, once you’ve selected the CUDA device and closed the options page, please find and attach the XML file %APPDATA%\McNeel\Rhinoceros\6.0\Plug-ins\RhinoCycles (9bc28e9e-7a6c-4b8f-a0c6-3d05e02d1b97)\settings\settings-Scheme__Default.xml (you can copy-paste this location into the address bar of Windows Explorer).

The HUD is telling me that I have both the GTX and the Radeon Pro in use through OpenCL.


Note that Task Manager doesn’t properly report GPU usage. It says 20–50% for the GTX and 0% for the Radeon Pro, but the GTX is almost maxed out according to GPU-Z and the Radeon Pro is around 75%.



I confirm that it worked. Windows Task Manager was incorrectly showing 0% GPU load while GPU-Z was showing a full load. It seems that Windows doesn’t communicate well with Pascal GPUs.

@JimCarruthers, thank you for the tip about selecting Compute_0 in the Windows Task Manager, although I think I’ll rely on GPU-Z more.

After using Cycles for a day, I can say this is a big thing for Rhino!

Thank you again and again.

My guess is that the task manager shows load that relates to UI / desktop usage, i.e. drawing of the desktop, application windows and their contents. Using Raytraced doesn’t do that directly - the load you see in the screenshot I posted is most likely where the Raytraced results are blitted to the OpenGL context framebuffers, and the framebuffers are being swapped, as well as the scene being drawn by OpenGL.

I am glad you are happy 🙂



Thank you again, Nathan. I find myself working more and more with Cycles.

Big question: is there any upcoming support for Volta-based GPUs? I received a Quadro GV100 as a gift, and hopefully it will be more usable with Cycles soon.

I have already added the necessary compiled CUDA kernels for the Volta architecture in the code branch that will become 6.4. The final 6.4 will come about a month after the 6.3 release (probably this week), but the first 6.4 RC1 will be available right when 6.3 goes final.

I heard from @jeff that the GPU runs great.


Yep…so far it’s working awesome on the GV100 I have here.


Thanks a million for the good news and the quick responses.

@Nathan and @jeff… I just received the 6.4 RC update, and I confirm the GV100 works perfectly now!

Thanks a million for all the hard work you are putting in.


Have fun (:

The newest versions of Rhino 6,
6.12.18349.12551 and 6.12.18351.21491,
do not seem to support CUDA 2.0 cards like the GTX 580 and Quadro 6000.
In the Cycles options, it’s not possible to choose anything but the CPU.
Is this by design, or is it a bug in the CUDA implementation?
The last version I know was working properly is 6.12.18320.05161.

To be able to support the latest Turing-based cards, I had to drop support for Fermi and older. Both the GTX 580 and the Quadro 6000 are Fermi-based cards. Support for Turing was indeed introduced after 6.12.18320.05161.