I was curious about this, so I did some benchmarks.
// Persistent fields (kept between solutions of the script component)
System.Diagnostics.Stopwatch StopWatch;
Mesh QuadMesh;
Random Rnd;
Method 1: updates each vertex in place with SetVertex. Computes in 10-25ms on my average ultrabook.
int w = 512;
int h = 424;

if (QuadMesh == null || Reset)
{
  StopWatch = new System.Diagnostics.Stopwatch();
  Rnd = new Random();

  // Config
  QuadMesh = new Mesh();
  QuadMesh.Vertices.Capacity = w * h;
  QuadMesh.Faces.Capacity = (w - 1) * (h - 1);
  QuadMesh.Vertices.UseDoublePrecisionVertices = false;

  // Vertices
  for (int y = 0; y < h; y++)
    for (int x = 0; x < w; x++)
      QuadMesh.Vertices.Add(x, y, 0);

  // Faces
  for (int y = 1; y < h; y++)
    for (int x = 1; x < w; x++)
      QuadMesh.Faces.AddFace(y * w + x, y * w + x - 1, (y - 1) * w + x - 1, (y - 1) * w + x);
}

StopWatch.Restart();
for (int x = 0; x < w; x++)
{
  for (int y = 0; y < h; y++)
  {
    int v = y * w + x;
    QuadMesh.Vertices.SetVertex(v, x, y, Rnd.NextDouble() * 10, false);
  }
}
Print(StopWatch.ElapsedMilliseconds.ToString());
MeshOut = QuadMesh;
Method 2: clears the vertex list and re-adds it from a Point3f array. Updates the mesh in 8-15ms and is a great deal more consistent, whereas the previous method fluctuates quite a lot.
// Additional persistent field for this method
Point3f[] CurrentVertexList;

int w = 512;
int h = 424;

if (QuadMesh == null || Reset)
{
  StopWatch = new System.Diagnostics.Stopwatch();
  Rnd = new Random();

  // Config
  QuadMesh = new Mesh();
  QuadMesh.Vertices.Capacity = w * h;
  QuadMesh.Faces.Capacity = (w - 1) * (h - 1);
  QuadMesh.Vertices.UseDoublePrecisionVertices = false;
  CurrentVertexList = new Point3f[w * h];

  // Vertices
  for (int y = 0; y < h; y++)
    for (int x = 0; x < w; x++)
      CurrentVertexList[y * w + x] = new Point3f(x, y, 0);

  // Faces
  for (int y = 1; y < h; y++)
    for (int x = 1; x < w; x++)
      QuadMesh.Faces.AddFace(y * w + x, y * w + x - 1, (y - 1) * w + x - 1, (y - 1) * w + x);
}

for (int x = 0; x < w; x++)
{
  for (int y = 0; y < h; y++)
  {
    int v = y * w + x;
    CurrentVertexList[v] = new Point3f(x, y, (float)Rnd.NextDouble() * 10);
  }
}

StopWatch.Restart();
QuadMesh.Vertices.Clear();
QuadMesh.Vertices.AddVertices(CurrentVertexList);
Print(StopWatch.ElapsedMilliseconds.ToString());
MeshOut = QuadMesh;
Method 3: replaces the mesh vertices one by one via the indexer, reusing the setup above. This was the slowest method, at 25-40ms.
StopWatch.Restart();
for (int x = 0; x < w; x++)
{
  for (int y = 0; y < h; y++)
  {
    int v = y * w + x;
    QuadMesh.Vertices[v] = new Point3f(x, y, (float)Rnd.NextDouble() * 10);
  }
}
Print(StopWatch.ElapsedMilliseconds.ToString());
MeshOut = QuadMesh;
So in this case Method 2 provided the fastest and most consistent results.
One very important thing to keep in mind is that the most taxing part of your pipeline is likely to be the display. We've found that it's rare to get Grasshopper running at more than 20-30fps while Rhino's viewports are drawing (regardless of computation complexity). When redraw is disabled the framerate increases dramatically. This effect is even more pronounced when working with NURBS, but that should not be relevant here.
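If you want to test that quickly, here is a minimal sketch of toggling viewport redraw from a C# component (just an illustration, assuming RhinoCommon's ViewTable redraw flag is what you want; ActiveDoc could equally be the script component's RhinoDocument):

// Sketch only: pause viewport redraw while updating, then redraw once afterwards
Rhino.RhinoDoc doc = Rhino.RhinoDoc.ActiveDoc;
doc.Views.RedrawEnabled = false;
// ... run your update loop / solver here ...
doc.Views.RedrawEnabled = true;
doc.Views.Redraw();

That obviously hides the live result, but it's a quick way to see how much of each frame the display is costing you.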
Of course running on better hardware helps, and so does avoiding 4K screens (in Rhino 6), as there's currently no way to run a lower-resolution viewport (unfortunately).
An easy way to check in your scenario is to put a stopwatch in your update loop. For example, with Method 3 above I get the following:

As you can see, the computation time is 33ms, but the time between updates is significantly larger, at 214ms in total. This is because of both rendering the viewports (with the large mesh) and rendering the Grasshopper canvas and Rhino window.
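For reference, this is roughly how you can measure both numbers inside the script (a sketch; FrameWatch is just a hypothetical extra persistent field, and the mesh update is any of the methods above):

System.Diagnostics.Stopwatch FrameWatch; // hypothetical persistent field, left running between solutions

if (FrameWatch == null)
  FrameWatch = System.Diagnostics.Stopwatch.StartNew();
Print("Time between updates: " + FrameWatch.ElapsedMilliseconds + "ms");
FrameWatch.Restart();

StopWatch.Restart();
// ... mesh update (e.g. Method 3) goes here ...
Print("Computation time: " + StopWatch.ElapsedMilliseconds + "ms");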
If I disable rendering of the Rhino viewport I get:

Which clearly suggests that around 75% of the total frame time is going to rendering the viewport, with roughly 15% going toward rendering Grasshopper, the canvas, and running the solver, and only about 10% being the actual mesh computation. This makes sense to me, because I'm running on an iGPU and a 4K display, drawing a (new) mesh of 217,088 vertices every frame (new in the sense that the GPU is updating the vertex list each frame).
So, TL;DR: it's likely that a great deal of your overhead isn't in the computation but rather in the display.