Rendered GI slower than expected in fullscreen

Hi guys, what’s causing the more than 4x slowdown when the viewport is maximized?
I have a simple scene with 4 meshes that aren’t heavy, and in the normal 4-view layout I get a TestMaxSpeed of 120 fps; if I maximize the viewport it drops down to 17 fps.
I would expect it to drop to 30 fps (1/4 of 120), since there are 4x more pixels to calculate shadows for, but 17 is just 1/7 of the initial speed.
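Converting fps to frame times makes the gap clearer. Here is the back-of-envelope math as a small sketch (assuming the maximized viewport has exactly 4x the pixels of one view in the 4-view layout):

```python
# Compare measured frame times against a linear-in-pixels prediction,
# using the TestMaxSpeed numbers above.
fps_small, fps_max = 120.0, 17.0
ms_small = 1000.0 / fps_small          # ~8.3 ms per frame
ms_max = 1000.0 / fps_max              # ~58.8 ms per frame

pixel_ratio = 4.0                      # maximized viewport ~4x the pixels
ms_predicted = ms_small * pixel_ratio  # ~33.3 ms, i.e. 30 fps

print(f"measured:  {ms_max:.1f} ms/frame")
print(f"predicted: {ms_predicted:.1f} ms/frame (linear in pixels)")
print(f"per-pixel cost grew by {ms_max / ms_predicted:.2f}x")
```

So the cost per pixel is roughly 1.8x higher at the larger size, on top of there being 4x more pixels.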

This is on a machine with a GTX 1070 and a 2560x1440 screen, so not even 4K. Of course it is much smoother in real life, as TestMaxSpeed doesn’t reduce the shadow quality, but I think it is important to look at why the slowdown isn’t linear.

I see similar results on my laptop, where Rendered mode is useless when maximized because the graphics card isn’t powerful enough even with the automatically reduced shadows; that is on a 2880x1800 Retina display with a GeForce 750 card. There we would need the overall resolution reduced by 4x, and even 8x or 16x when manipulating the view, not only shadow reduction (see the sketch below).
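Something like this heuristic is what I have in mind (just an illustration; the function and the frame-time budget are hypothetical, and it assumes render cost scales roughly with pixel count):

```python
def pick_render_scale(last_frame_ms, target_ms=33.3, min_scale=0.25):
    """Hypothetical heuristic: pick a linear downscale factor so the
    next frame fits the budget, assuming cost ~ pixel count (scale**2)."""
    scale = 1.0
    while last_frame_ms * scale * scale > target_ms and scale > min_scale:
        scale *= 0.5  # halve linear resolution -> 4x fewer pixels
    return scale

# e.g. a 60 ms full-resolution frame on the Retina laptop:
print(pick_render_scale(60.0))  # -> 0.5, render at 1/4 the pixels and upscale
```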

Thanks for looking into it, as this will affect 4K monitor setups even more.

Is this with Rhino WIP?

Sorry, forgot to tag the post.
Yup, that’s GI shadows and SubD meshes there :wink:

SSAO is something for @DavidEranen

@Holo We have been looking at this lately, and we are aware that we might need to change the quality when running on high-DPI displays, or maybe even change the default quality on all systems. For 4K displays the effect is a bit of overkill. Rest assured that this will still be adjusted.

Regarding the non-linear increase in render time, it’s anyone’s guess. My guess would be that the amount of data processed per pixel becomes much larger than the GPU caches can handle, so the GPU spends disproportionately more time waiting for data to be fetched. But it’s just a guess.
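One quick sanity check that a constant per-pixel cost can’t explain the numbers (a sketch; the 4x pixel ratio between the two cases is assumed):

```python
# Fit t = c + k * pixels to the two measurements above. If cost per
# pixel were constant, the fixed overhead c should come out >= 0.
ms_small, ms_max = 1000.0 / 120, 1000.0 / 17  # ~8.3 ms, ~58.8 ms
p = 1.0                                       # pixels in one view (normalized)

# Solve: c + k*p = ms_small  and  c + 4*k*p = ms_max
k = (ms_max - ms_small) / (3 * p)             # ~16.8 ms per pixel unit
c = ms_small - k * p                          # ~-8.5 ms

print(f"fixed overhead c = {c:.1f} ms")
# A negative overhead is impossible, so the per-pixel cost itself must
# grow with resolution -- consistent with more cache misses on the
# larger framebuffer.
```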

-David