Fastest way to display Mesh Wires


I’m interactively modifying a mesh which is drawn in a custom DisplayConduit. Visualizing mesh wires helps in understanding the mesh geometry. Unfortunately, with denser meshes the DrawMeshWires call takes a significant amount of time and slows things down considerably.

        protected override void PostDrawObjects(DrawEventArgs e)
        {
            e.Display.DrawMeshShaded(Mesh, Brush.displayMaterial);
            e.Display.DrawMeshWires(Mesh, Brush.displayColor);
        }

In the example below, I’m working with a 450K quad mesh. A purely shaded display takes approx. 30 ms per frame; adding wires raises the total to about 450 ms. That’s a 15x slowdown!

Any hints on how to draw these more efficiently? Alternatively, maybe there are better-suited display modes which would help users perceive changing mesh geometry without displaying any wires?

This is most likely due to the fact that the edges are not directly accessible on the Mesh data structure; under the hood a MeshTopology gets used. Possibly that gets regenerated at each frame, since you are interactively modifying the mesh. If the mesh topology does not change, only the vertex locations, you could cache the vertex indices of each edge’s start and end, and draw lines with those instead.
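A sketch of that caching idea, assuming RhinoCommon’s MeshTopologyEdges/TopologyVertices and that only vertex positions change between frames (the field names here are illustrative, not from the thread):

        // Cache once, whenever the topology is (re)built:
        private int[] _edgeStarts, _edgeEnds;

        private void CacheEdgeIndices(Mesh mesh)
        {
            var edges = mesh.TopologyEdges;
            _edgeStarts = new int[edges.Count];
            _edgeEnds = new int[edges.Count];
            for (int i = 0; i < edges.Count; i++)
            {
                IndexPair pair = edges.GetTopologyVertices(i);
                // Map topology vertices back to mesh vertex indices
                _edgeStarts[i] = mesh.TopologyVertices.MeshVertexIndices(pair.I)[0];
                _edgeEnds[i] = mesh.TopologyVertices.MeshVertexIndices(pair.J)[0];
            }
        }

        // Per frame: rebuild lines from the current vertex positions only
        protected override void PostDrawObjects(DrawEventArgs e)
        {
            var v = Mesh.Vertices;
            var lines = new Line[_edgeStarts.Length];
            for (int i = 0; i < lines.Length; i++)
                lines[i] = new Line(v[_edgeStarts[i]], v[_edgeEnds[i]]);
            e.Display.DrawLines(lines, Brush.displayColor);
        }

This avoids touching MeshTopology per frame; only the Line array is rebuilt from the moved vertices.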


Indeed, it is possible that DrawMeshWires forces the calculation of certain parts of the Mesh.

I guess in your case, lines or dots would be equally readable.
I would advise you to copy all the vertices of the Mesh into a PointCloud and draw that,
applying the same sculpt modifications to it as to the mesh.
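A sketch of that suggestion, assuming RhinoCommon’s PointCloud and DrawPointCloud (the field name and the 3-pixel point size are arbitrary choices for illustration):

        private PointCloud _cloud; // rebuilt whenever the vertices move

        private void RebuildCloud(Mesh mesh)
        {
            _cloud = new PointCloud(mesh.Vertices.ToPoint3dArray());
        }

        protected override void PostDrawObjects(DrawEventArgs e)
        {
            e.Display.DrawMeshShaded(Mesh, Brush.displayMaterial);
            // A single draw call for all points, instead of one per vertex
            e.Display.DrawPointCloud(_cloud, 3, System.Drawing.Color.Red);
        }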

Creating a second conduit might also be nice. Imagine your video with a textured terrain: if the texture provides good readability, lines or dots would not need to be visible at all.

If the goal is only good readability, maybe you can just apply a MatCap material with DrawMeshShaded.


Thanks @menno and @kitjmv!

My suspicion is that it’s the sheer number of lines required to represent the mesh edges which slows things down. The overhead grows rapidly with the number of vertices…

As mentioned earlier, my ultimate goal is to allow users to understand what modifications they are applying to the geometry. Point display seems like a good alternative. I tried directly displaying mesh vertices like so:

        protected override void PostDrawObjects(DrawEventArgs e)
        {
            e.Display.DrawMeshShaded(Mesh, Brush.displayMaterial);
            e.Display.DrawMeshVertices(Mesh, System.Drawing.Color.Red);
        }

but this doesn’t show any vertices. Do you have experience with this method? The only help I could find on the forum was this post, but I can’t find the C# equivalent of m_nMeshVertexSize.

Having experimented a bit more, I found that rebuilding normals on the mesh helps a lot:
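In case it helps others, a minimal sketch of what that rebuild can look like, assuming the RhinoCommon Mesh API (when and where to call it is up to your conduit):

        // Recompute normals after the vertices have been edited
        Mesh.FaceNormals.ComputeFaceNormals();
        Mesh.Normals.ComputeNormals();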


On the same 450k poly mesh, we’re down to approx. 40 ms (35 ms without screen capture) and it is fairly easy to understand what is happening while sculpting:

A good material would probably help even more. I’m thinking of using something like @Holo is working on for his awesome TerrainMesh plugin:

This is correct. Calling DrawLines may give you better results, though that function in particular could use some optimizations.

Thanks @stevebaer, will give that a go. Any idea why DrawMeshVertices doesn’t do anything?

Tried DrawMeshWires(Mesh mesh, Color color, int thickness), modifying the thickness argument
(I assume this is the C# equivalent).

If you can see the dots but the performance is not good, use a PointCloud.
I insist, because when drawing with OpenGL, GPU calls cost a lot of time.
I don’t know Rhino’s internal implementation, but I’m sure the GPU calls for a PointCloud are optimized.

The wires are drawn correctly (albeit slowly), but the vertices are not. DrawMeshVertices doesn’t seem to have any overloads which accept a vertex size argument.

Correct, sorry, I confused it with DrawMeshWires.

Did you try calling this without a call to DrawMeshShaded? The shading may be getting drawn on top of the points.

Yep, tried with and without DrawMeshShaded. In both cases the vertices are not visible.

To be clear, I’m calling this from a DisplayConduit in the PostDrawObjects override.

I’m not sure; I would need to reconstruct this in a test. That’s a super old function I added back in V5 and I doubt anyone even uses it since you don’t have any control over the point size. My guess is the function is using the current display mode settings for mesh point size which are probably zero.

Do you need an overload to this function that takes a point size?


That’s correct. Changing the mesh vertex size in the display settings fixes the issue. I think it would make sense to have an overload allowing us to define the point size independent of the current display settings.
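Until such an overload exists, one possible workaround (a sketch, not confirmed in the thread) is to skip DrawMeshVertices and pass the vertex array to DrawPoints, which takes an explicit radius in pixels:

        protected override void PostDrawObjects(DrawEventArgs e)
        {
            e.Display.DrawMeshShaded(Mesh, Brush.displayMaterial);
            // DrawPoints takes an explicit radius, independent of display mode settings
            e.Display.DrawPoints(Mesh.Vertices.ToPoint3dArray(),
                                 Rhino.Display.PointStyle.RoundSimple,
                                 3, System.Drawing.Color.Red);
        }

Note this allocates a Point3d array every frame; for very dense meshes the PointCloud approach suggested above may still be preferable.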

@stevebaer, is there a way to use a GLSL shader in a DisplayConduit? Something like the GL Mesh Shader component of the GHGL plugin but accessible directly in Rhinocommon without Grasshopper.

I’d like to recreate the MatCap material workflow described by @jho in this video.

DisplayMaterial seems to be a bit limited in this regard.

Yes, but it is a lot of work and will not work on Mac once we’ve switched to Metal. Drawing shaders in a conduit is exactly what GhGL does. You can see the source for this at


Thanks @stevebaer!

If I understood the logic correctly, the idea would be to do the following:

        GLSLViewModel _model = new GLSLViewModel();

        _model.DrawMode = OpenGL.GL_TRIANGLES;
        _model.VertexShaderCode = ""; // vertex shader source
        _model.FragmentShaderCode = ""; // fragment shader source

        // is this how to add the mesh to the view model?
        var uniformsAndAttributes = _model.GetUniformsAndAttributes(data.Iteration);
And then do this:

To actually display geometry in the viewport, the following code would have to be added to the DisplayPipeline:

Am I missing anything?

I would recommend compiling and debugging ghgl. Walk through what the code does in the debugger. There is a lot more to what gets run.

Also, the component code that you are referencing is not one that you would use or need. GlBuiltInShaderComponent is a private component used only for testing and editing Rhino’s built-in shaders. Look at GlShaderComponentBase instead.

Just a comment on normals: normals here refers to vertex normals, which are computed by averaging the face normals of the faces connected to each vertex. So if you are going to update both vertex normals and face normals, it makes sense to first update the face normals and then the vertex normals.


Thanks @Terry_Chappell!
Swapped the two operations but can’t see any visual difference.

Interestingly, both face and vertex normals need to be computed for the mesh to display correctly. The order doesn’t seem to have any effect (at least in the case I’m working with).