@Holo You really need to understand what “shadow mapping” is, what it does, and how it works before trying to reverse engineer it by counting pixels. There are a handful of real-time shadow mechanisms (tricks), each with its own pros and cons. The mechanism Rhino uses is called “Shadow Mapping”…which consists of a 32-bit depth buffer AND a grayscale image.
The depth buffer is basically the ENTIRE scene rendered from the point of view of the light source… This produces depth values relative to a position somewhere out in space.
Some background…
In order to render any kind of 3D frame, you need 3 things…
- A point in space (the camera or eye)
- A direction (where the camera is looking)
- A frustum that defines the viewable area
A directional light source (i.e. The Sun) only supplies one of those… the direction. It has no point in space (and no, you cannot consider where the directional light object was placed in the scene). This means Rhino needs to cook up the other two requirements. So where does the camera get placed to encompass the entire scene? …and how big does the frustum need to be? And yes, it MUST encompass the entire scene… Why? Because objects outside the view’s frustum can cast shadows down onto and into the field of view… So even though you can’t see those objects in the view, they can and will cast shadows onto objects you can see…depending on the light direction relative to the camera direction.
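To make that concrete, here’s a minimal sketch (illustrative only, not Rhino’s actual code) of how a direction-only light can be turned into a usable “camera”: take the direction it supplies, then derive a position and an orthographic frustum from the scene’s bounding sphere so the entire scene is covered.

```cpp
// Hypothetical sketch: cook up a point in space and a frustum for a
// directional light, given only its direction and the scene's bounding sphere.
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3   mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double len(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

struct LightCamera {
  Vec3   position;   // the cooked-up point in space
  Vec3   direction;  // the only thing the light actually supplies
  double halfSize;   // half-width/height of the NxN orthographic frustum
  double nearClip;
  double farClip;
};

// sceneCenter/sceneRadius: bounding sphere of EVERYTHING in the scene.
LightCamera MakeDirectionalLightCamera(Vec3 lightDir, Vec3 sceneCenter,
                                       double sceneRadius)
{
  Vec3 dir = mul(lightDir, 1.0 / len(lightDir));      // normalized direction
  // Back the "camera" away from the scene along the light direction so the
  // whole scene sits in front of it (a Zoom-Extents-like computation).
  Vec3 position = sub(sceneCenter, mul(dir, sceneRadius * 2.0));
  return { position, dir, sceneRadius, sceneRadius, sceneRadius * 3.0 };
}
```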
Given that…
The depth map…
Rhino already has tools that automatically compute some of the unknowns…it’s basically a Zoom Extents…which calculates the near/far clipping planes, camera target and camera position… So what about the frustum? That’s where the “size” value you discuss above comes into play. The frustum is basically an NxN field of view… where N is the specified size. So a 2k x 2k x 32-bit depth map yields 16MB (Rhino’s default) …and a 16k x 16k x 32-bit map yields 1GB. … Since these are stored on the GPU as textures, most GPUs/drivers today don’t allow for texture sizes larger than 16k, so that’s the current cap.
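If you want to sanity-check those numbers, the arithmetic is just N x N texels times 4 bytes per 32-bit depth value (a quick sketch; the function name here is mine, not a Rhino setting):

```cpp
// Back-of-the-envelope check of the "size" -> memory relationship described
// above: an NxN, 32-bit (4 bytes per texel) depth texture.
#include <cstdio>

std::size_t DepthMapBytes(std::size_t n) { return n * n * 4; }

int main() {
  std::printf("2048 x 2048   -> %zu MB\n", DepthMapBytes(2048)  >> 20); // 16 MB
  std::printf("16384 x 16384 -> %zu MB\n", DepthMapBytes(16384) >> 20); // 1024 MB = 1 GB
}
```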
So we now have a direction, a point in space, and a frustum… The “light” is now acting like a camera, and the scene is rendered from the point of view of the light…producing depth values relative to the computed point in space. Hopefully it’s obvious that the larger the scene, the further out in space that point needs to be placed…and when you move the point further away, objects get smaller and take up less space in the frustum…less space means fewer depth values (pixels) generated for those objects. Fewer depth values means less detail (i.e. fidelity goes down).
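A rough, hypothetical calculation shows why the fidelity drops: with a fixed NxN map, every time the frustum has to widen to enclose a bigger scene, each depth texel covers more world space, so a small object is hit by fewer and fewer depth samples.

```cpp
// Illustrative only: how many depth texels a 1-unit-wide object receives
// as the light's frustum grows to cover larger and larger scenes.
#include <cstdio>

int main() {
  const double mapResolution   = 2048.0;  // assumed NxN depth map size
  const double objectWidth     = 1.0;     // a 1-unit-wide object of interest
  const double frustumWidths[] = {10.0, 100.0, 1000.0, 10000.0};

  for (double width : frustumWidths) {
    double unitsPerTexel  = width / mapResolution;
    double texelsOnObject = objectWidth / unitsPerTexel;
    std::printf("frustum %8.0f units wide -> %8.4f units/texel, "
                "%8.1f texels across the object\n",
                width, unitsPerTexel, texelsOnObject);
  }
}
```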
The grayscale image…
It’s just a bitmap that’s the exact size of the viewport’s window.
Once a depth map exists for a given light, the scene you see in the viewport is then rendered. When an object is “hit” by a view ray (for a given pixel), another ray is shot back towards the light source. If that ray intersects the light’s frustum, the light’s depth value is obtained at that intersection. That depth value is then compared with the distance from the light to the hit object… If the stored depth is less than the hit object’s distance (meaning something else sits closer to the light), then the object (pixel) is “in shadow”, and a black pixel is produced…otherwise a white pixel is produced…resulting in a black and white image that’s the size of the viewport window. This is really the “shadow map”…because it maps over the top of the viewport frustum, showing where shadows occur and where they don’t. It ends up being a grayscale image due to softening, blurring and multi-sampling techniques that try to eliminate the pixelation that can occur along the edges of shadows.
Once the shadow map exists…it is used by the rendering shaders to determine if a given pixel is in-shadow, and the appropriate shading is applied.
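Here’s a minimal, hedged sketch of that per-pixel test (names and structure are illustrative, not Rhino’s shaders). The projection of the hit point into the light’s frustum is assumed to have already happened; the comparison and the neighborhood averaging that produces the grayscale softening look something like this:

```cpp
// Illustrative shadow test: compare the depth stored in the light's depth map
// with the hit point's distance from the light, then average a few neighboring
// texels to soften the hard black/white edge into gray values.
#include <vector>
#include <cstddef>

struct DepthMap {
  std::size_t size;            // NxN
  std::vector<float> depths;   // depth values rendered from the light's POV
  float Sample(std::size_t x, std::size_t y) const { return depths[y * size + x]; }
};

// lightDepth:      depth stored in the map at the hit point's projection
// distanceToLight: the hit point's actual distance from the light
// Returns 0 = in shadow (black pixel), 1 = lit (white pixel).
float ShadowTest(float lightDepth, float distanceToLight)
{
  // Something nearer to the light already occupies this texel,
  // so the hit point is in its shadow.
  return (lightDepth < distanceToLight) ? 0.0f : 1.0f;
}

// Averaging the test over a small neighborhood of texels is one way the
// hard black/white result gets softened into the grayscale image.
float SoftShadow(const DepthMap& map, std::size_t x, std::size_t y,
                 float distanceToLight)
{
  float sum = 0.0f;
  int count = 0;
  for (int dy = -1; dy <= 1; ++dy)
    for (int dx = -1; dx <= 1; ++dx) {
      std::size_t sx = x + dx, sy = y + dy;
      if (sx < map.size && sy < map.size) {   // unsigned wrap guards the bounds
        sum += ShadowTest(map.Sample(sx, sy), distanceToLight);
        ++count;
      }
    }
  return count ? sum / count : 1.0f;
}
```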
So that’s basically shadow-mapping in a nutshell…
The Pros:
- It’s fast
- It’s easy
- No massive triangle list needs to be created AND sorted
- It’s easily adapted to support multiple (infinite) light sources.
The Cons:
- It can be very memory intensive
- It creates self-shadowing artifacts (the depth map value represents the same object/pixel that’s being hit, so it thinks that pixel is in-shadow, when it’s really looking at itself)
- Its quality is scene and resolution dependent.
Having said all of that…
Your example above is only taking into account the object that you know (or want to) casts a shadow…but you’re forgetting about the plane it’s sitting on. Rhino has no way of knowing that you only want the red object to cast a shadow (well, it does, which I’ll mention in a bit)…so it has to take into account EVERYTHING in the scene when computing the light’s frustum, and that includes the large planar object you’ve placed in the scene. Using planes as ground planes is just a bad idea all around…which is why the “Groundplane” object should be used…I know it has its own set of problems…so let’s not go there in this topic. Basically, select EVERYTHING in your scene and run ZS (ZoomSelected)…and see how much screen real estate your red object takes up. Rhino’s shadow mapper does a better (tighter) job at this, but you get the idea. Here’s an example that I’ve done…
Red object on a large planar object…but I’ve zoomed in on the Red object…
Looks like crap, right?
That’s because the depth map really includes the plane and the red object…which would look something like this:
Everywhere you see red, is what gets used when determining “shadows”… and zooming in on the object has no impact.
That’s why these two object properties exist:
- Casts shadows
- Receives shadows
If you know that an object won’t be casting any shadows, or you simply do not want an object to cast shadows (or receive them), then you should turn OFF “Casts shadows”… What that does is remove that object from the equation, which can/will allow for a much closer viewpoint to be used. Turning it OFF for the planar object in my example now yields this:
Much better, and I’m still at the default 16MB shadow memory usage.
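Conceptually, what turning “Casts shadows” off buys you is a tighter frustum fit: only the objects flagged as casters contribute to the bounds the light camera has to cover. A hypothetical sketch (names are mine, not Rhino’s API):

```cpp
// Illustrative only: fit the light's frustum to shadow casters only, so a
// huge ground plane with "Casts shadows" turned off no longer inflates it.
#include <algorithm>
#include <vector>

struct Box {
  double min[3], max[3];
  void Union(const Box& b) {
    for (int i = 0; i < 3; ++i) {
      min[i] = std::min(min[i], b.min[i]);
      max[i] = std::max(max[i], b.max[i]);
    }
  }
};

struct SceneObject {
  Box  bounds;
  bool castsShadows;   // the per-object property discussed above
};

// Everything with "Casts shadows" turned off simply drops out of the equation.
Box ShadowCasterBounds(const std::vector<SceneObject>& objects)
{
  Box result{{ 1e300,  1e300,  1e300},
             {-1e300, -1e300, -1e300}};   // start with an empty/inverted box
  for (const auto& obj : objects)
    if (obj.castsShadows)
      result.Union(obj.bounds);
  return result;
}
```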
This is exactly what the “Bubble” is supposed to do (although I’ll admit that I haven’t looked at that part of the code in many, many years, so I wouldn’t be surprised if it’s not working now)… But it basically discards all objects outside the bubble from the equation, and only objects inside the bubble are used… again, that’s how it’s supposed to work.
Lastly, the banding lines in your example look like “self-shadowing” artifacts… i.e. the plane is casting a shadow onto itself… This too can be controlled in the Shadows settings… which basically adds a bias in the comparison with the depth values mentioned above… It may be that we need to revisit that and change or allow for higher settings.
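In sketch form (not the actual shader), the bias just nudges the stored depth before the comparison: too little leaves the banding/acne, too much makes shadows detach from their objects.

```cpp
// Illustrative bias: shift the stored depth so a surface doesn't register as
// shadowing itself due to limited depth precision.
float ShadowTestWithBias(float lightDepth, float distanceToLight, float bias)
{
  return (lightDepth + bias < distanceToLight) ? 0.0f : 1.0f;  // 0 = shadowed
}
```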
Sorry for such a long-winded reply… I will look at what’s in store for V9, and see if we can do something about a single, dedicated light source (The Sun)…which could use a different mechanism altogether… Only worrying about one light source, of a single light type, is an order of magnitude easier than trying to support any number of lights of multiple types, all casting shadows…in different ways.
Hopefully that sheds some light on the topic 
-J