RhinoCycles CPU @ 100%, even when GPU is selected

RhinoCycles_ListDevices shows

  • (0) CPU
  • (1) CUDA Geforce card (GPU)
  • (2) Network

I set the device by using
RhinoCycles_SelectDevice -> 1 (to select the CUDA device)

With this set, a regular cycles render (non-viewport) renders quickly, with SOME CPU usage. This is ok, although I’d rather see very little CPU impact when CUDA is selected.

But when I switch the viewport to Cycles, my CPU goes up to 100% (fans speed up). This is NOT good. I expect the rendering to be done on the GPU and leave my CPU mostly alone, when I select the GPU as the device.

Is there a tie-in missing between the settings for the viewport render and a regular Cycles external-window render?

I also noticed that the viewport render kept increasing in passes, way beyond what the Cycles engine is set to, whereas the regular render stops at 200 samples, no matter what the engine is set to.

Is there a setting I’m missing somewhere to APPLY the settings to the engine? Just curious. :slight_smile:

Hmm, this may be due to Cycles pushing its render results to the viewport. That is currently done over the CPU. Right now Cycles will keep rendering forever in a viewport.

I hope I can tap into the system that would allow me to bypass that step and have the drawing done entirely on the GPU.

There are several other solutions I’m working on to prevent constant CPU usage:

  • allow a maximum number of samples to stop rendering at when reached and no changes are made to the scene
  • possibly hook up the pause and play buttons from the HUD.
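Roughly the idea behind the first bullet, as a throwaway Python sketch (the class and method names here are invented for illustration, not RhinoCycles API): keep counting passes, stop once a cap is reached, and restart the count whenever the scene changes.

```python
class ViewportSampler:
    """Sketch of "stop at a sample cap when the scene is unchanged"."""

    def __init__(self, max_samples):
        self.max_samples = max_samples
        self.samples = 0

    def scene_changed(self):
        # Any edit to the scene restarts progressive refinement from pass 0.
        self.samples = 0

    def should_render_pass(self):
        # Once the cap is hit and nothing changed, skip further passes
        # so the CPU/GPU go idle instead of rendering forever.
        return self.samples < self.max_samples

    def pass_done(self):
        self.samples += 1
```

The pause/play buttons from the second bullet would just flip an extra flag that `should_render_pass` also checks.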

Hooking up the drawing code into the OpenGL system would be the nicest solution to have, but we’ll see how fast I can get that done.


Could you post your hardware configuration so I can have an idea of what Cycles is being used on?



Yea, I noticed that the pause and play buttons don’t do much yet.

_ListDevices gives

Device 0: CPU > Intel Core2 Quad CPU    Q6700  @ 2.66GHz > 0 | False | True | False | CPU
Device 1: CUDA_0 > GeForce GTX 480 > 0 | False | True | False | CUDA
Device 2: NETWORK > Network Device > 0 | False | True | False | Network

Running Windows 10 Pro, 64bit, on 8 GB RAM.

So right now the Cycles viewport display mode uses the GPU to render it, and also the CPU pipeline to paint it? Yea, that seems a little redundant.

Another optimization that I’ve seen is that when the user is dragging the mouse to rotate the view, the samples are set VERY low, like 1 or 2. (seen with NEON I think, and possibly in Blender) This way it keeps the responsiveness of the system very high while actively modifying the view.
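The navigation optimization described above boils down to a tiny sample budget while dragging and the full budget at rest. A minimal sketch of that idea (function name and defaults are mine, not anything from NEON, Blender, or RhinoCycles):

```python
def samples_for_state(navigating, max_samples, nav_samples=2):
    """Per-update sample budget for a progressive viewport.

    While the user is dragging the view, render only a couple of samples
    per update so the viewport stays responsive; once the camera is at
    rest, refine up to the full budget.
    """
    return nav_samples if navigating else max_samples
```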

Also, pass (sample) 1 seems very dark compared to pass (sample) 2. Not sure if there’s any way to soften it, but it does create a bit of a jarring dark effect when grabbing the view to rotate it.

Just a few thoughts to add to your optimization pile. :grinning:

Thanks for all your great work!

It is indeed redundant. A result of how the integration is done currently, making it hard to use the OpenGL drawing code that Cycles itself has.

These two bits are exactly what I’m now working on. I hope to be able to tap into the original progressive-refine drawing code that Cycles already has. If not, I’ll at least simulate it with pixel-data copying.
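For what progressive refine means here, a small sketch of the general technique (my own illustration of how such renderers typically display passes, not the actual Cycles code): each pass contributes one sample per pixel into an accumulation buffer, and the displayed image is the running average, so the image converges smoothly rather than jumping in brightness between passes.

```python
def refine(accum, sample, count):
    """One progressive-refine step.

    accum  -- running per-pixel sum of all samples so far
    sample -- the new pass's per-pixel values
    count  -- number of passes already accumulated
    Returns the updated sum, the new count, and the averaged
    image that would be pushed to the viewport.
    """
    for i, value in enumerate(sample):
        accum[i] += value
    count += 1
    display = [total / count for total in accum]
    return accum, count, display
```

In the simulated variant, `display` is the pixel buffer that would be copied over for drawing each update.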


Sweet! I’m glad McNeel is backing the cycles integration, and that you’re happy to code it. Thanks for your hard work!