The shadow map is a floating point depth buffer because that’s how shadow mapping works… The shaders perform depth lookups into the shadow map to determine whether a point on a piece of geometry is actually “within shadow”, by comparing it against the stored depth buffer value… I won’t go into the technical details here, but you can easily find articles on real-time shadow mapping and how it works.
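Just to illustrate the comparison I’m describing, here’s a minimal sketch in Python-as-pseudocode. All names are hypothetical; the real work happens in the shaders, not like this:

```python
# Sketch of the shadow-map depth comparison described above.
# Names are illustrative only, not Rhino's actual pipeline.

def in_shadow(fragment_depth_in_light_space, shadow_map, u, v, bias=0.005):
    """Return True if the fragment is farther from the light than the
    closest occluder recorded in the shadow map at texel (u, v)."""
    closest_occluder_depth = shadow_map[v][u]
    # The small bias avoids "shadow acne" caused by depth-precision
    # differences between the shadow pass and the camera pass.
    return fragment_depth_in_light_space - bias > closest_occluder_depth

# Tiny example: a 2x2 shadow map where the light saw an occluder
# at depth 0.4 in the upper-left texel.
shadow_map = [[0.4, 1.0],
              [1.0, 1.0]]
print(in_shadow(0.7, shadow_map, 0, 0))  # behind the occluder -> True
print(in_shadow(0.3, shadow_map, 0, 0))  # in front of it -> False
```

In a real shader this is a hardware-filtered depth-compare texture fetch, but the logic is the same: one depth per texel from the light’s point of view, compared against each fragment’s depth in light space.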
As for your idea about caching only the viewport rendered results… Hmmmm, that’s actually pretty doable… my only concern is that I don’t think you can cache a single static image that represents ALL shadows from all light sources and use it to get “correct” results. A “shadows within shadows” sort of thing… Technical display modes actually do something like this, and I know the results aren’t correct in many cases… but maybe it’s good enough while you’re tumbling the viewport around. Or perhaps I can store a light-shadow-texture per light per camera… all sorts of possibilities are coming to mind atm.
I’ll put it on my list of things to look at… Storing a viewport-sized texture for each light per camera/view is certainly more feasible than storing the shadow map for each light.
For future discussions, let’s establish some terminology here so that you and I both know what’s being discussed:
The “Shadow Map” - the floating point depth buffer data generated when rendering the scene from the point of view of the light. The size of this item is determined by the Memory setting within the display mode’s “Shadows” section.
The “Shadow Texture” - what I’ll call this new cached image that represents the shadow pixels within the viewport when rendering the scene from the camera using the “Shadow Map”… Its size depends on the viewport size, not on any of the Shadow settings. Therefore, I will need to regenerate the Shadow Texture any time the viewport size changes… as well as any time a light’s position changes.
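The invalidation rules above could be sketched roughly like this. This is purely hypothetical bookkeeping, keyed per light per view as discussed, and none of these names correspond to actual Rhino SDK calls:

```python
# Hypothetical sketch of a Shadow Texture cache, keyed per light per view.
# Regenerates on viewport resize or when the light has changed.

class ShadowTextureCache:
    def __init__(self):
        # (light_id, view_id) -> (viewport_size, texture)
        self._cache = {}

    def get(self, light_id, view_id, viewport_size, light_changed, render_fn):
        key = (light_id, view_id)
        entry = self._cache.get(key)
        # Regenerate when missing, when the viewport size changed,
        # or when the light moved since the last shadow pass.
        if entry is None or entry[0] != viewport_size or light_changed:
            texture = render_fn()  # stand-in for the real shadow pass
            self._cache[key] = (viewport_size, texture)
        return self._cache[key][1]

cache = ShadowTextureCache()
tex1 = cache.get("light0", "persp", (800, 600), False, lambda: "texA")
tex2 = cache.get("light0", "persp", (800, 600), False, lambda: "texB")
print(tex1, tex2)  # second call hits the cache, so both are "texA"
```

The nice property here is that the expensive part (the render pass per light) only runs when one of the stated invalidation conditions trips, which is exactly why it might be “good enough” while tumbling the view.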
Lots to think about on this now…Thanks for the distraction