Compared to the normal viewport Raytraced mode, ViewCaptureToFile is still slow…
Anyone else seeing this?
I'm setting the output to the viewport size with none of the optional settings checked (like transparent background), so it should be the same as the viewport Raytraced render.
(If the viewport has already sampled to 1000, then ViewCaptureToFile set to 1000 samples is instant, so there should be no reason why it's so slow…)
The workaround is just letting the viewport render and then saving it with the same or a lower sample count, so it's OK… but sometimes I'd rather just run the command and have the file saved.
I recall you are on a 4K screen. That means the viewport most likely shows a grainier version (pixel size 2). Capturing the view will render at pixel size 1, so as a result it will re-render everything up to the set sample count.
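As a rough back-of-the-envelope illustration of why the viewport result can't simply be reused (assuming a hypothetical 3840×2160 screen; the pixel-size behavior is as described above):

```python
# Illustrative arithmetic only, not actual Rhino/Cycles code: why a
# pixel-size-2 viewport render cannot be reused for a pixel-size-1 capture.
viewport_w, viewport_h = 3840, 2160  # an example 4K screen

# At pixel size 2, Raytraced renders at half resolution in each dimension
# and scales the result up, which is why it looks grainier.
coarse_pixels = (viewport_w // 2) * (viewport_h // 2)

# A capture at pixel size 1 renders every screen pixel.
full_pixels = viewport_w * viewport_h

print(full_pixels // coarse_pixels)  # 4: four times as many pixels to sample
```

So a capture at pixel size 1 has four times as many pixels to sample, none of which exist yet at that resolution.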
Do you mean the default viewport DpiScale setting?
But in that case, letting the viewport finish sampling to the same sample count and then saving (below) would give a render that is not as high resolution as just doing ViewCaptureToFile?
In that case, though, I guess the render quality of ViewCaptureToFile is quite dependent on the DpiScale parameter… which I guess should be noted in the UI.
Because currently ViewCaptureToFile saves instantly if its sample count is the same as the viewport's…
Ah yes, so when you are doing a capture with the sample count set to 1000 in the capture dialog, but your viewport isn't at 1000 yet, the rendering will indeed be redone. I don't have a good solution for that.
Current logic:
- viewport set to 1000, currently at 200, start capture with 1000: starts a complete new render session
- viewport set to 160, currently at 160, start capture with 1000: starts a complete new render session
- viewport set to 1000, currently at 200, start capture with 150: instant capture of the current state of the viewport, as the required (minimum) count has already been reached
- viewport set to 1000, currently at 1000, start capture with 1000: instant capture, as the required count has already been reached.
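The four cases above can be modeled in a couple of lines. This is only a hypothetical sketch of the decision logic as described, not the actual Rhino/Cycles implementation:

```python
# Hypothetical model of the capture decision logic described above.
def capture_is_instant(viewport_target: int, viewport_done: int,
                       capture_samples: int) -> bool:
    # The capture can reuse the viewport result only if the viewport has
    # already reached at least the requested capture sample count; the
    # viewport's *target* count doesn't matter.
    return viewport_done >= capture_samples

# The four cases from the list:
assert not capture_is_instant(1000, 200, 1000)   # new render session
assert not capture_is_instant(160, 160, 1000)    # new render session
assert capture_is_instant(1000, 200, 150)        # instant capture
assert capture_is_instant(1000, 1000, 1000)      # instant capture
```

Note that only the samples already rendered are compared against the requested capture count; the viewport's configured target plays no role.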
All makes sense, if in all cases the saved image is of equal quality.
For the two cases below, wouldn't the render quality be lower if on 4K displays the viewport is blockier?
It seems the capture function has two rendering output qualities: one rendered from scratch (higher quality but more time) and one that just saves what is shown in the viewport (quality dependent on how rough the viewport is set).
Otherwise, I don't see how sampling in the viewport and then saving is fast while capturing from scratch takes more time.
Maybe I should test the output quality and size of the images for both cases…
I guess there is a tradeoff to be made here, and I am assuming users want the full resolution as the end result. But I don't think the DPI settings matter in these cases; it is about the requested samples versus the samples ready in the viewport.
I’m looking to do the same thing and discovered that ViewCaptureToFile does not use the GPU when it renders outside the viewport. Could you check whether this is the reason for the performance decrease? It could be a bug.
Yes, the Raytraced view is set to use only the GPU (CUDA). I can confirm it uses the GPU in the Windows Task Manager. (below)
The issue is that when I start ViewCaptureToFile, the Task Manager shows the GPU is not used at all. (below) This is likely why it takes so long.
Task Manager is not a good way to check GPU usage. While using Raytraced in the viewport you can click on the name Raytraced (Cycles); it should show the device being rendered with. If it is using the GPU it will show that GPU (something like Raytraced (Cycles@Radeon (TM) Pro WX 9100) or Raytraced (Cycles@GeForce GTX 1060 6GB)); if it is using the CPU it will say so (something like Raytraced (Cycles@CPUx4)).
The very same rendering device will be used for ViewCaptureToFile and ViewCaptureToClipboard. Doing so won’t update the UI, so Task Manager will show 0% or close to it, since no UI is being refreshed. To properly check whether your GPU is used you should use GPU-Z instead; it gives a much better idea of GPU usage. Now, if you are rendering to a (much) larger resolution and possibly a higher NumberOfPasses count, then it will indeed take as long as it needs to take. When capturing the view at a higher resolution or a higher pass count, you should consider setting the viewport maximum samples count to 2 or 3. That way you won’t be rendering two scenes at the same time, which would take extra long.
Thank you! The GPU-Z program is great. I set up a test scene and let the viewport render to 1000 samples. - It took a little more than a minute and used the GPU constantly.
Then, I set the viewport sample count to 3 and moved the camera a little, letting it settle. After that, I started the same task (1000 samples) at the viewport’s original resolution with ViewCaptureToFile.
The GPU-Z load profile shows some strange behavior. - Any idea why it’s throttling like that?
Hmm, not sure why it would do that. I’ll have to check the code again, but it is supposed to render all samples to the end and only then write the results to the framebuffer. What is the output of Help > System Information? Could you paste that here, please?
I think this may be due to having the same scene in memory twice. The capture then regularly pages memory from host to device, which can take a while.
Overnight, I performed a raytraced ViewCapture at 5000x3000px with 1000 passes. The output (.tiff) was a single color. (below) Could this be because the output resolution was too high? I just did the same raytraced ViewCapture with 23 passes and the output was the same. Tried again outputting to .bmp and the output was also the same.