I haven’t had to do any renderings for quite some time. Now it’s time for that again, so I prepared a scene and wanted the PC to do a few renderings while I’m gone - which is a pretty common use case, I think.
With Rhino 6 my only option to do so is to use a script or macro combined with “ViewCaptureToFile”, right? The problem is something I had already observed back in the WIP phase: when using “ViewCaptureToFile”, Cycles doesn’t fully utilize the GPU (GTX 1060). Utilization jumps between 0 and 100% at varying frequencies depending on tile size and resolution. If I just set the viewport to Raytraced, the GPU is fully utilized, always close to 100% - same resolution, tile size and samples, of course.
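For reference, this is roughly the kind of macro I mean - just a sketch, and the exact option names and prompts of the dashed “-ViewCaptureToFile” command may differ between Rhino 6 builds, so check the command-line options in your install (the file paths and named view below are placeholders):

```
! Restore a named view, then capture it to a file at a fixed resolution
-_NamedView _Restore "Camera01" _Enter
-_ViewCaptureToFile "C:\Renders\scene_cam01.png" Width=1920 Height=1080 _Enter
```

Chaining several of these blocks in one macro (or a script) is how I'd expect unattended batch rendering to work.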
I compared the two, and with “ViewCaptureToFile” the render time is much longer - in some cases more than double, again depending on tile size and resolution.
The only tool I’m aware of that can correctly monitor GPU utilization, frequencies, etc. is MSI Afterburner. Tools like the new GPU monitor in the Windows 10 Task Manager are useless: if you switch to the right graph (Compute_0 in my case), you’ll see a curve that isn’t updated at a high enough frequency, so it only shows a misleading, averaged line. Everyone should be aware that the Win10 Task Manager is basically useless in this case.
Now back to my actual problem:
How can I use batch rendering (or just “ViewCaptureToFile”) without being slowed down by this GPU utilization bug?
Here is a quick screenshot from MSI Afterburner: