Can anyone please explain this ShowZBuffer result phenomenon to me


#1

I am working with the ShowZBuffer command in version 4 to produce greyscale heightfield images of simple forms. I have noticed differences in the image quality it produces. Sometimes I get a more blurred image with more levels of grey, and sometimes a sharper, flatter image. I tried to figure out why this was happening, and I noticed, as in the pics posted, that a mesh of the object will give the more blurred image and the original polysurface will give the flatter version. The strange thing is this is not always the case! Sometimes (more often than not) I can't produce the blurred type of image with the more shades of grey that I prefer. I only run the command with the one object visible on the screen. I have plugged away at this for a while but can't figure out what is going on. Anyone have an idea? Thanks!


(Willem Derks) #2

I cannot test anything right now as I'm on my phone.
FWIW: try what happens when you split the polysurface halfway and hide the lower part. IIRC, ZBuffer is based on the depth of all visible geometry; maybe with the mesh the bottom part is (already) culled and thus not taken into consideration.

-Willem


#3

Thanks for that suggestion Willem. Tried it but it did not work.
Interestingly, I did notice that when I view the zbuffer of both shapes visible in the scene at the same time (both lying flat on the construction plane), I get the flat look, as when I just view the zbuffer of the polysurface. But as soon as I remove the polysurface object from the scene and just view the zbuffer of the mesh, I get the "smoothed blur" look that I want. As I mentioned before, though, I don't get this look with all mesh objects, and that is what I would like to do.


#4

I think the difference may be caused by having trimmed surfaces in the scene. It looks like Rhino's ShowZBuffer is calculating the depth based on the entire surface definition, not just the visible trimmed part. So if you use the extracted render mesh, it is truly just the mesh depth; with a trimmed surface/polysurface, the depth takes into account the not-visible, trimmed-away part. I tested it over here with a piece of a sphere and it seems to confirm that.

Can you try running the ShrinkTrimmedSrf command on your original polysurface and see if it changes your ZBuffer look?
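The idea above can be sketched with a toy example (plain Python, not the Rhino SDK; the linear depth-to-grey normalization here is only an assumption about how a zbuffer display might map depth to shade, not Rhino's actual code). If the trimmed-away part of a surface extends the depth range, the visible part gets squeezed into a narrow, flat band of greys:

```python
# Sketch: how a ShowZBuffer-style greyscale display could flatten when
# hidden (trimmed-away) depth extremes are included in the range.
# The depth-to-gray mapping is a hypothetical normalization, not Rhino's.

def depth_to_gray(depths, z_near, z_far):
    """Map each depth to 0-255 grey; white = nearest, black = farthest."""
    span = z_far - z_near
    return [round(255 * (1 - (z - z_near) / span)) for z in depths]

# Visible part of the object spans depths 2.0 .. 3.0
visible = [2.0, 2.5, 3.0]

# Case 1: range taken from the visible mesh only -> full 0..255 spread
print(depth_to_gray(visible, z_near=2.0, z_far=3.0))   # [255, 128, 0]

# Case 2: range includes a trimmed-away part reaching down to depth 10
# -> visible greys are squeezed into a narrow, "flat" band
print(depth_to_gray(visible, z_near=2.0, z_far=10.0))  # [255, 239, 223]
```

On this reading, extracting the render mesh (or shrinking the trimmed surfaces) removes the hidden depth extreme, so the visible depths stretch back over the full black-to-white range.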


(Wim Dekeyser) #5

OK - that would explain why (by the looks of the images posted here) the NURBS version doesn’t seem to use the entire black to white range, whereas the mesh version does.
But there must be something more going on since it doesn’t seem to ‘work’ for all mesh objects.
(unable to test anything at the moment)


#6

I guess in many cases there will be no difference between mesh and NURBS if the surfaces are not trimmed or have been shrunk.


(Pascal Golay) #7

Hi Flubber- can you post the file you’re testing with?

-Pascal


#8

file for forum.3dm (289.8 KB)

Here are the shapes. Thanks, Derek! Yes, that was it about the trimmed surfaces. (But see below.) Also I think (and maybe someone can confirm this) the reason I thought that I was also getting a "flatter" look to some forms in the zbuffer view was because I was comparing them to the pyramidal trapezoid form I posted above. This seems to be a type of form that has a particularly fuzzy look to it when viewed in zbuffer, and I now think it may just be an appearance thing of the shape of this form?

But interestingly, if you look at the file above and view the mesh, or shrink the trimmed surfaces of the polysurface (Derek's fix), and then view one of the forms in zbuffer in a top view, you will see the fuzzy effect. Now, while still in zbuffer view, change the viewport to perspective view and the fuzzy effect will disappear. While still in perspective, tumble the object so you see it dead-on from the top. Now click perspective view and you will see it change from a flat look to fuzzy. This still puzzles me?


#9

@Flubber - you are right - in perspective projection it looks like the depth gradient is somehow related to the lens length you are using, whereas in parallel projection the geometry's furthest and closest points are used. Please take a look at a quick comparison test - Top view switched to perspective projection with various lens lengths (and ZoomExtents each time).
Not sure why, but that's how it seems to work.
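One possible explanation for the lens-length dependence, as a sketch under the assumption that the display uses a standard OpenGL-style depth buffer (I have not verified this against Rhino's implementation): in parallel projection, depth is linear between the nearest and farthest points, but in perspective projection the stored depth is hyperbolic in eye-space z, so the gradient bunches up near the camera, and the near/far setup (which shifts with lens length and framing) changes how the greys distribute:

```python
# Sketch: linear (orthographic) vs hyperbolic (perspective) depth.
# Assumes the classic OpenGL projection-matrix depth mapping; this is
# an illustration, not Rhino's documented behavior.

def ortho_depth(z, near, far):
    """Orthographic: depth is linear in eye-space z between near and far."""
    return (z - near) / (far - near)

def persp_depth(z, near, far):
    """Perspective: OpenGL-style depth remapped to 0..1, hyperbolic in z."""
    ndc = (far + near) / (far - near) - (2 * far * near) / ((far - near) * z)
    return (ndc + 1) / 2

near, far = 1.0, 100.0
for z in (1.0, 50.5, 100.0):
    print(f"z={z:6.1f}  ortho={ortho_depth(z, near, far):.3f}  "
          f"persp={persp_depth(z, near, far):.3f}")
```

With these numbers, halfway between the planes the orthographic depth is 0.5 while the perspective depth is already about 0.99, which would give exactly the kind of gradient shift between parallel and perspective viewports described above.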