Rhino.Display.ZBufferCapture class usage

I tried to build a Z-depth map of a view and to export the floating-point data to a binary file.
I am using RhinoCommon and C# for the project.
A part of the code for illustration is:

Rhino.Display.ZBufferCapture zb = new Rhino.Display.ZBufferCapture(viewport);
Single[] sa = new Single[xsize * ysize];
for (int iy = 0; iy < ysize; iy++)
    for (int ix = 0; ix < xsize; ix++)
        sa[iy * xsize + ix] = zb.ZValueAt(ix, iy);

It works fine but there are two problems:

  1. Data from ZValueAt() is scaled to 0.0…1.0, so it can’t be compared to other exports, even of the same object from a different view (camera) position.
  2. Even though the view size can be set to an arbitrary value (for example 4000x4000), only the screen area (approximately 1920x1080 in the case of an HD screen) is filled with data.

I can’t find any examples or documentation on the usage of the Rhino.Display.ZBufferCapture class.
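(For reference: if the captured values are standard perspective depths, and the viewport’s near and far clipping distances are known, they can in principle be linearized with the usual formula. This is an assumption – the exact mapping that ZBufferCapture uses is not documented:)

```python
def linearize_depth(z_norm, near, far):
    """Convert a normalized, non-linear depth value in [0, 1]
    (0 at the near plane, 1 at the far plane) back to a linear
    distance from the camera, assuming standard perspective depth."""
    return (near * far) / (far - z_norm * (far - near))

# The endpoints map back to the clipping planes:
print(linearize_depth(0.0, 1.0, 100.0))  # 1.0   (near plane)
print(linearize_depth(1.0, 1.0, 100.0))  # 100.0 (far plane)
```

Note how non-linear the buffer is: a normalized value of 0.5 corresponds to a distance of only ~1.98 with near = 1 and far = 100.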

Does anybody have experience with it?
Thank you in advance


The 0.0 to 1.0 range adapts to the bounding box of the visible geometry, so it changes when you change the view. The 0–1 range is also not linear, so you can’t just map it into space even if you had the proper data.

This is related to a common problem: when you switch the view to the z-buffer display (I forgot the command) and make a bigger viewport capture (the Rhino command), every patch will have a different grayscale range.

All in all, you will probably be happier just running a few million ray intersections with the geometry while having a coffee break. And best use the render meshes instead of the NURBS for the intersections – because the coffee break could otherwise become a vacation…

side note:
It is similar to rendering (not in Rhino), where you can (painfully) extract data from a z-buffer pass, but everyone uses the position pass for that (rgb = xyz)…

Thank you Atair,
I understand everything you say.
Until now I have been creating lines and looking for the first intersection point – just what you say, but one coffee is not enough :-). It runs all night, and that is with the NURBS converted to meshes.
If this is done with the z-depth from ZBufferCapture, it takes about 15 seconds. The result is acceptable for small objects. For big ones the problem is that the bitmap resolution is limited by the screen resolution (problem 2).
Do you have any idea how to extend the bitmap to 4000x4000, for example?
Thank you once again for the detailed answer

no ideas for the zbuffer way, but some tips for the intersections:

  • use Intersection.MeshRay – it gives you the first intersection (fast)
  • join the meshes into one if you can

this code:

      for (int i = 0; i < 100000; i++) {
        Ray3d ray = new Ray3d(new Point3d(0, -100, 0), Vector3d.YAxis);
        double result = Rhino.Geometry.Intersect.Intersection.MeshRay(InMesh, ray);
        if (result >= 0) { // MeshRay returns a negative value on a miss
          Point3d hitPos = ray.PointAt(result);
        }
      }

runs in 3.2 sec per 100k intersections with a 100k-vertex mesh, so about 30 sec per million ‘pixels’ – not that bad…

edit: depending on the situation, a bounding box–ray intersection test before the mesh intersections (in case you don’t join the meshes) can help a lot.
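The bounding-box pre-test can be sketched with the standard slab method – this is plain Python for illustration, not RhinoCommon (there, BoundingBox and Rhino’s own intersection utilities would be used instead):

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: True if the ray origin + t*direction (t >= 0)
    intersects the axis-aligned box. Meant as a cheap pre-filter
    before the more expensive mesh-ray intersection."""
    t_min, t_max = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:      # parallel to the slab and outside it
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_min, t_max = max(t_min, t1), min(t_max, t2)
            if t_min > t_max:         # slab intervals no longer overlap
                return False
    return True

print(ray_hits_aabb((0, -100, 0), (0, 1, 0), (-1, -1, -1), (1, 1, 1)))   # True
print(ray_hits_aabb((10, -100, 0), (0, 1, 0), (-1, -1, -1), (1, 1, 1)))  # False
```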

Thank you Atair,
I am using

Point3d[] Cross3d = Rhino.Geometry.Intersect.Intersection.MeshLine(mesh, line, out faceIds);

Following your suggestion, I changed it to

double result = Rhino.Geometry.Intersect.Intersection.MeshRay(InMesh, ray);

Unfortunately, the execution time is the same.
After that, I checked the number of meshes – it is about 400. I joined them, and again there was no acceleration.
Maybe it is faster for other projects.
Anyway, thank you for the ideas!

Hi @gstoilov,

I don’t believe the z-buffer is going to help you determine intersections of objects.

Perhaps you can provide some background on what you are trying to do and why?


– Dale

Hi Dale,
Thank you for the answer here
I’m trying to build a Z-depth map of a view and to export the floating-point data to a binary file.
Excuse my imperfect English. I will try to explain the idea in more detail.
I need a Z-depth map (2.5D) of an object or scene designed in Rhino3D for processing the data outside Rhino3D. The resulting “image” should have dimensions of about 4000x4000 pixels or more and floating-point precision.
Previously I was using a vertical line and calculating the intersection point with the maximum Z coordinate between this line and every object in the scene. Moving this line across the X and Y coordinates and taking the Z of the intersection points, I obtain the required Z-depth map. This algorithm works fine, but it’s very slow if the object is a NURBS object or a mesh with many nodes.
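The sweep just described can be sketched in plain Python. For a self-contained illustration, the per-pixel intersection is done against an analytic sphere instead of Rhino geometry – in the real script that step would be the MeshLine/MeshRay call:

```python
import math

def sphere_top_z(x, y, center, radius):
    """Highest Z at which a vertical line through (x, y) hits a sphere,
    or None on a miss. Stands in for the per-pixel mesh intersection."""
    cx, cy, cz = center
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    if d2 > radius * radius:
        return None
    return cz + math.sqrt(radius * radius - d2)

# Sweep the vertical line over a small grid; misses record a background depth of 0.
xsize = ysize = 9
center, radius = (4.0, 4.0, 0.0), 3.0
zmap = [[sphere_top_z(ix, iy, center, radius) or 0.0
         for ix in range(xsize)] for iy in range(ysize)]
print(zmap[4][4])  # 3.0 - the sphere's top, directly above its center
```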
I tried to use the functionality of the z-buffer class, and the result is excellent for small objects, when the output “image” (the bitmap produced by the _ShowZBuffer command) is smaller than the viewport resolution – in my case full HD, 1920x1080.
Best regards


Maybe you can get a speed gain by using a workflow like so:

  • get all the meshes from the geometry
  • you can join all meshes into one for ease of bookkeeping
  • for each mesh vertex, calculate the distance to the camera point or camera plane (depending on the desired projection)
  • for each vertex, calculate the normalized depth value
  • apply vertex colors with a grayscale value based on the normalized depth
  • hide all document geometry and add the mesh with vertex colors
  • make sure you have a display mode that displays the mesh with vertex colors correctly
  • capture the viewport at desired size.

Does this make sense?
It might still be slow, but I suspect it will be faster than sampling the mesh for every point.
You could maybe optimize by culling faces that point away from the view.
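The culling idea in that last point boils down to a dot-product test per face. A minimal sketch in plain Python (in Rhino the normals would come from Mesh.FaceNormals; here they are just tuples):

```python
def cull_back_faces(face_normals, view_direction):
    """Keep the indices of faces whose normal points toward the viewer,
    i.e. against the view direction (negative dot product).
    For an orthographic top view the view direction is (0, 0, -1)."""
    kept = []
    for i, n in enumerate(face_normals):
        dot = sum(a * b for a, b in zip(n, view_direction))
        if dot < 0.0:
            kept.append(i)
    return kept

normals = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0), (1.0, 0.0, 0.0)]
print(cull_back_faces(normals, (0.0, 0.0, -1.0)))  # [0] - only the up-facing face survives
```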


a crude python example:

import rhinoscriptsyntax as rs
import scriptcontext as sc
import Rhino
import System

def depth_mesh():
    # assumes meshes are present in the file
    mesh_id = rs.ObjectsByType(32)[0]
    mesh = rs.coercemesh(mesh_id)
    vertices = mesh.Vertices
    depth_colors = System.Array.CreateInstance(System.Drawing.Color, vertices.Count)
    max_distance = 80
    plane = rs.WorldXYPlane()
    for i, vertex in enumerate(vertices):
        distance = plane.DistanceTo(Rhino.Geometry.Point3d(vertex))
        normalized_depth = distance / max_distance
        if normalized_depth > 1: normalized_depth = 1
        if normalized_depth < 0: normalized_depth = 0
        gray_value = int(255 * normalized_depth)
        color = System.Drawing.Color.FromArgb(gray_value, gray_value, gray_value)
        depth_colors[i] = color
    # apply the colors and push the modified mesh back into the document
    mesh.VertexColors.SetColors(depth_colors)
    sc.doc.Objects.Replace(mesh_id, mesh)
    sc.doc.Views.Redraw()

depth_mesh()


Thank you,
I will try this approach.
I will encode the floating-point distance as a 24-bit color.
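Such a packing scheme (the details below are a sketch, not the poster’s actual code) splits the normalized depth across three bytes, giving 2^24 levels instead of 256. It only works if the captured bitmap preserves the exact vertex colors – any shading, anti-aliasing, or interpolation corrupts the low bytes:

```python
def encode_depth_24bit(d):
    """Pack a normalized depth d in [0, 1] into an (r, g, b) byte triple."""
    v = round(d * (2 ** 24 - 1))
    return (v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF

def decode_depth_24bit(rgb):
    """Recover the normalized depth from the byte triple."""
    r, g, b = rgb
    return ((r << 16) | (g << 8) | b) / (2 ** 24 - 1)

# Round trip: the error stays below one part in 2**24.
d = 0.123456
rgb = encode_depth_24bit(d)
print(rgb, decode_depth_24bit(rgb))
```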

Let me know how it goes, I’m curious about the speed gain.

Good Luck

Hi Willem,
I have tried your approach, but the problem is that I need more than 256 levels. I encoded more than 256 levels as a 24-bit color, but Rhino alters the color components together, as a whole 8-bit color, rather than treating them separately as low and high bytes.
So I can’t say anything about the speed.
Thank you for the idea


Can I ask what the goal is? The process and its problems are clear now, but I am curious why you need this data.
…because maybe people here have some ideas

Thank you for asking
We are measuring the deformation of objects under different force loads or temperature changes.
For the software we have created, the input data is 2.5D, and because the deformations are small, we need floating-point data.
Probably many commercial programs can directly use OBJ or DXF files, but we need to test and work with ours. (It’s our child :wink: )