Update Rhino geometry in real time from a C# component

Hi everyone,
I am working on a C# component that streams Kinect point cloud data to Rhino. Currently I trigger a SolveInstance every 20 ms and output a new List of Point3d on each tick. This creates overhead, though, since roughly 100,000 points have to be re-initialized every time. The C# part, timed with a Stopwatch, takes about 15-20 ms, while the overall computing time runs up to 250 ms.

Is there a way to initialize the points once and subsequently only update their X,Y,Z coordinates? Any other ideas on how to reduce the overhead?

Relevant bits of code:
pManager.AddPointParameter("Pointcloud", "PC", "Pointcloud", GH_ParamAccess.list);

pointCloud = new List<Point3d>();

DA.SetDataList(0, pointCloud);

base.OnPingDocument().ScheduleSolution(20, new GH_Document.GH_ScheduleDelegate(ScheduleDelegate));
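where ScheduleDelegate is, in essence, the usual expire-and-recompute callback (a minimal sketch):

private void ScheduleDelegate(GH_Document doc)
{
    // Expire without an immediate recompute; the scheduled solution
    // then picks this component up and calls SolveInstance again.
    ExpireSolution(false);
}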

Hi @mrhe, I’m the author of Tarsier (source code), which has similar goals to what you’re working on. I have a few suggestions that may help improve your performance.

  1. Output a list of GH_Point instead of Point3d. You can do this with LINQ: DA.SetDataList(0, pointCloud.Select(x => new GH_Point(x)));. As a general rule this will massively improve performance when you’re outputting objects, as it saves Grasshopper from having to cast and create the goo wrappers for you (see the sketch after this list).
  2. Grasshopper doesn’t natively implement the Rhino PointCloud class, so I created one with Tarsier to avoid exactly this (creating hundreds of thousands of points every frame). In proper Grasshopper fashion, it previews as a Rhino point cloud, has a custom preview (thickness), bake/set, deconstruct/reconstruct, and dynamic casting to RhinoCommon’s PointCloud class (or other implementations) so it is interoperable. The code for this is here
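For (1), a minimal sketch of what the output path can look like. This isn’t Tarsier’s actual code; the class name, the pointCloud field and the single output parameter are placeholders:

using System;
using System.Collections.Generic;
using System.Linq;
using Grasshopper.Kernel;
using Grasshopper.Kernel.Types;
using Rhino.Geometry;

public class PointStreamComponent : GH_Component
{
  // Filled elsewhere, e.g. by the Kinect frame callback.
  private List<Point3d> pointCloud = new List<Point3d>();

  public PointStreamComponent()
    : base("PointStream", "PS", "Streams Kinect points", "Params", "Util") { }

  protected override void RegisterInputParams(GH_InputParamManager pManager) { }

  protected override void RegisterOutputParams(GH_OutputParamManager pManager)
  {
    pManager.AddPointParameter("Pointcloud", "PC", "Pointcloud", GH_ParamAccess.list);
  }

  protected override void SolveInstance(IGH_DataAccess DA)
  {
    // Wrapping each Point3d in GH_Point ourselves saves Grasshopper
    // from casting every item into goo on output.
    DA.SetDataList(0, pointCloud.Select(p => new GH_Point(p)));
  }

  public override Guid ComponentGuid
  {
    get { return new Guid("f1a7a1d1-0000-4000-8000-000000000001"); }
  }
}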

From memory, with the two considerations above, loading a full frame was around 10-20 ms on a mediocre PC. In your case, I expect you’ll see an enormous performance gain just by using the single line of LINQ I mentioned in (1).

(2) is primarily aimed at addressing the draw time in Rhino, as a point cloud renders significantly faster than 100,000 individual points. And, since Grasshopper also runs on Rhino’s main thread, this affects the update speed.

A simple benchmark of technique (1), using 100,000 randomly generated points:

Awesome, so we’ve gone from 565 ms to 12 ms. 4700% faster!

Now, unfortunately, because we’re drawing all those 100,000 individual points, having just one of those components enabled shows some pretty poor display performance!
Time to regen viewport 100 times = 5.41 seconds. (18.50 FPS)

So to address (2), let’s output a point cloud and write a display function for it. Another advantage of this is that you can include the Kinect color directly in the point cloud rather than managing two separate lists. Constructing and outputting the point cloud performs similarly to (slightly better than) the array method. In this case I’ve written the renderer as a separate component, but you could just as easily embed it in the receiver.
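The display side looks roughly like this (a simplified sketch, not the exact code in the attachment; cloud is assumed to be a PointCloud field filled during the solve):

// Members of the renderer component: draw the cloud with Rhino's
// native point-cloud drawing instead of 100,000 individual points.
private PointCloud cloud; // assumed: set in SolveInstance / RunScript

public override BoundingBox ClippingBox
{
  get { return cloud == null ? BoundingBox.Empty : cloud.GetBoundingBox(true); }
}

public override void DrawViewportWires(IGH_PreviewArgs args)
{
  if (cloud == null) return;
  args.Display.DrawPointCloud(cloud, 2); // 2 px point size
}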

Since the renderer doesn’t do any computing in RunScript, it doesn’t show a computation time.
But when we test viewport speed again with !_TestMaxSpeed, we see:
Time to regen viewport 100 times = 0.83 seconds. (120.77 FPS), or about 6.5× the previous rendering performance.

I’ve attached the example demonstrating this.

pointCloud_performance.gh (8.2 KB)

Cheers
Cam

Cameron,
all of this is awesome! While doing some initial research on similar components out there, I remember looking into Tarsier, but it then fell off my radar and I went with Project_Owl.

The ultimate goal for this component is to create an augmented reality sandbox in the Rhino/GH environment. The point cloud part has to be fast and shouldn’t introduce any overhead, especially since the focus lies on the subsequently created mesh and the various types of analysis that can be run on it.

One thing I’ve been trying to address as well is the inherent noise in the Kinect sensor’s depth map. So far I’m using the built-in RhinoCommon method, which is fairly slow. I can see you have implemented a promising smoothing algorithm on the point cloud itself. I’ve been toying with applying Gaussian blurring and will compare the results with what can be achieved with your algorithm.

Here is a very early proof-of-concept of where I am now:

Any suggestions on how to better address the workflow are more than welcome

Ah, very nice. In that case I’d suggest doing the mesh creation in one component rather than separating it out. You should be able to go straight from point cloud data to a colored quad mesh, which will save on some processing overhead.

I like smoothing the ‘raw’ Kinect data rather than smoothing a mesh, because it’s better at eliminating noise that might appear for a single frame, rather than just smoothing it out and leaving muted spikes. The downside is that you get a small delay.

That said, if you do want to go with smoothing the mesh, you should have a look at what I do in Chromodoris (source code). Essentially: make a map of the required topology once, then use multi-threading to smooth all the vertices ‘at once’ (not really, but close enough), and repeat for x iterations.
That was about as fast as I could make a crude mesh smoothing algorithm, and it benchmarked a bit higher than any others I could find (Weaverbird etc.).
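In sketch form, the pattern is something like this (an illustration, not Chromodoris’s actual implementation; m is the mesh and iterations the smoothing count):

using System.Threading.Tasks;
using Rhino.Geometry;

// Build the neighbor map once, up front.
var top = m.TopologyVertices;
int n = top.Count;
int[][] neighbors = new int[n][];
for (int i = 0; i < n; i++)
  neighbors[i] = top.ConnectedTopologyVertices(i);

// Then smooth in parallel, repeating for x iterations.
for (int iter = 0; iter < iterations; iter++)
{
  Point3f[] pos = new Point3f[n];
  Parallel.For(0, n, i =>
  {
    float sx = 0, sy = 0, sz = 0;
    foreach (int j in neighbors[i])
    {
      Point3f p = top[j];
      sx += p.X; sy += p.Y; sz += p.Z;
    }
    int c = neighbors[i].Length;
    pos[i] = c == 0 ? top[i] : new Point3f(sx / c, sy / c, sz / c); // crude Laplacian average
  });
  // Write back on a single thread, moving every mesh vertex that
  // maps to each topology vertex.
  for (int i = 0; i < n; i++)
    foreach (int k in top.MeshVertexIndices(i))
      m.Vertices.SetVertex(k, pos[i].X, pos[i].Y, pos[i].Z);
}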

Thanks Cameron, I’m learning a lot from your code. Really like how you leverage the ConnectedTopologyVertices method for your mesh smoothing. Having said that, I’m getting good results with Gaussian blurring of the point cloud itself and will probably stick with this.

That’s a very interesting idea. How would you approach this? I’m currently triangulating the points, sending them to GH as a mesh and then deconstructing it into faces to colorize them. Being able to spit out a colored quad mesh directly would be a much better strategy.

So because we know that we’re getting an ordered list of coordinates from the Kinect, we can create the quad faces directly from the indices. For example, with the topology:

0---1---2---3---4---5---6
|   |   |   |   |   |   |
7---8---9---10--11--12--13
|   |   |   |   |   |   |
14--15--16--17--18--19--20

We could directly do:

AddVertices(<all vertices>);
AddFace(0,1,8,7);
AddFace(1,2,9,8);
AddFace(2,3,10,9);
etc...

Hence, knowing the stride (x-resolution), we can come up with the function:

  /// <summary>
  /// Creates a quad mesh from an ordered grid of points
  /// Diagram of structure:
  ///
  ///   2 -- 3
  ///   |    |
  ///   1 -- 0
  ///
  ///   x-1 , y-1  --  x   , y-1
  ///
  ///       |            |
  ///       |            |
  ///       |            |
  ///
  ///   x-1 , y    --  x   , y
  /// </summary>

  Mesh CreateQuadMesh(IEnumerable<Point3d> pts, int stride) {
    int xd = stride;               // The x-dimension of the data
    int yd = pts.Count() / stride; // The y-dimension of the data

    Mesh m = new Mesh();
    m.Vertices.Capacity = pts.Count();             // Don't resize the array
    m.Vertices.UseDoublePrecisionVertices = false; // Save memory
    m.Vertices.AddVertices(pts);                   // Add all points to the vertex list

    for (int y = 1; y < yd; y++)       // Iterate over the y dimension
    {
      for (int x = 1; x < xd; x++)     // Iterate over the x dimension
      {
        // The array index of the item at (x, y) is y * xd + x
        m.Faces.AddFace(y * xd + x, y * xd + x - 1, (y - 1) * xd + x - 1, (y - 1) * xd + x);
      }
    }
    return m;
  }

In practice there’d be some further optimization you could make by creating the mesh directly from the frames rather than creating an additional Point3d list first, and you could also go directly to Point3f to avoid some garbage collection and memory overhead (since a 640x480 Point3d array is 640 x 480 x 3 x 8 bytes ≈ 7.4 MB per frame, which we don’t need floating around).
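Something like this, where depth, xd, yd and DepthToPoint() are hypothetical stand-ins for your Kinect frame buffer and mapping:

// Build the mesh straight from a depth frame with single-precision
// vertices, skipping the intermediate Point3d list entirely.
Mesh m = new Mesh();
m.Vertices.UseDoublePrecisionVertices = false;
m.Vertices.Capacity = xd * yd;
for (int y = 0; y < yd; y++)
  for (int x = 0; x < xd; x++)
  {
    Point3f p = DepthToPoint(depth[y * xd + x], x, y); // your mapping
    m.Vertices.Add(p.X, p.Y, p.Z); // floats in, no Point3d allocated
  }
// ...then add the faces exactly as in CreateQuadMesh above.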

And subsequently, because we then know the topology of the mesh (due to knowing the stride), the smoothing operation can become faster again, as we don’t need to calculate and cache the mesh topology at the start.
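As a sketch of the idea: on a regular grid the neighbors of vertex i are just i - 1, i + 1, i - xd and i + xd, so no topology lookup is needed (v holds the vertices as Point3f, smoothed is the output buffer):

using System.Threading.Tasks;

Point3f[] v = m.Vertices.ToPoint3fArray();
Point3f[] smoothed = new Point3f[v.Length];
Parallel.For(0, v.Length, i =>
{
  int x = i % xd, y = i / xd;
  if (x == 0 || x == xd - 1 || y == 0 || y == yd - 1)
  {
    smoothed[i] = v[i]; // keep the border fixed
    return;
  }
  // Average the four grid neighbors (left, right, up, down).
  smoothed[i] = new Point3f(
    (v[i - 1].X + v[i + 1].X + v[i - xd].X + v[i + xd].X) / 4f,
    (v[i - 1].Y + v[i + 1].Y + v[i - xd].Y + v[i + xd].Y) / 4f,
    (v[i - 1].Z + v[i + 1].Z + v[i - xd].Z + v[i + xd].Z) / 4f);
});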

Cameron, this is gold to me!
Approaching it this way, I can also do my elevation banding in the same pass. Since I am iterating over the points anyway to check whether they fall within a user-defined trimming rectangle, I could use the same loop to color each vertex based on its respective elevation and store the colors in an array, like this:

System.Drawing.Color[] colors = new System.Drawing.Color[pts.Count()];
colors[i] = Color(z);

Where Color() defines the logic of how a color gradient is assigned. Later I would use this array to colorize the mesh:

m.Vertices.AddVertices(pts);
m.VertexColors.SetColors(colors);

This should save some precious ms compared to building the mesh, sending it to GH, decomposing, colorizing and reconstructing it again…
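Put together, it would look roughly like this (InsideTrim() stands in for my trimming-rectangle check and Color() for the gradient logic; pts is assumed to be a List<Point3d>):

var colors = new System.Drawing.Color[pts.Count];
for (int i = 0; i < pts.Count; i++)
{
  if (InsideTrim(pts[i]))        // user-defined trimming rectangle
    colors[i] = Color(pts[i].Z); // elevation-based gradient
}
m.Vertices.AddVertices(pts);
m.VertexColors.SetColors(colors); // one color per vertex, in one call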

edit: here is the relevant method

Yep, that’s spot on. It’s probably worth doing a small performance comparison between setting each color as part of the loop using Add and buffering to an array and using SetColors. In the same (very quick) test for AddFace versus AddFaces with an array buffer, I found incrementally adding them to be slightly faster (or the same, but with cleaner code).

Also, a side note: while you probably can’t modify the mesh in a multi-threaded way, writing to an array can be multi-threaded. Hence, if you end up doing any math that applies per vertex (i.e. if for some reason your color calculation were taxing), you could multithread it, write to array indices, then apply the results to the mesh.
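For example (a sketch; ColorFromZ() is a hypothetical stand-in for your gradient function, and pts is the vertex array):

using System.Threading.Tasks;

// Compute per-vertex colors in parallel (independent writes to
// distinct array indices are thread-safe)...
var colors = new System.Drawing.Color[pts.Length];
Parallel.For(0, pts.Length, i =>
{
  colors[i] = ColorFromZ(pts[i].Z);
});
// ...then touch the mesh from a single thread.
m.VertexColors.SetColors(colors);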

Cheers

Thanks again. I will do some testing and let you know which approach yields better results.
I’m using Parallel.For for parts of the routine already, so it makes sense to try and multithread this bit as well.

Got a Kinect, too. I will try your approach and let you know about the results and, if possible, about the optimizations.

Andi, I’m happy to share my code if you’re interested. Let me know if you can find any use for it.

Hi @mrhe, I would be highly interested in your code. Could you please share the code with me?

Hi @haraaald45, sure thing. The whole project is open source on GitHub; here is the specific method that takes care of the meshing:

You can read more about unsafe access to mesh vertices in this topic:

Hi @mrhe, thank you very much for the source code. However, I think this code might not be the right fit for my case, since the function CreateQuadMesh() does not consider holes in the point cloud… :frowning_face:

You’re right, the above method operates on the Kinect’s depth frames, where every pixel has a value (even if it’s -9999). As a result, it creates a 2.5D mesh: for every X,Y coordinate there is only one Z value.
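If you need holes, one option (an untested sketch, not part of my current code) is to mask the sentinel pixels and only add a quad when all four of its corners are valid:

// `valid` marks pixels whose depth is usable (e.g. not the -9999
// sentinel); fill it while reading the depth frame.
bool[] valid = new bool[xd * yd];
for (int y = 1; y < yd; y++)
  for (int x = 1; x < xd; x++)
  {
    int a = y * xd + x,           b = y * xd + x - 1;
    int c = (y - 1) * xd + x - 1, d = (y - 1) * xd + x;
    if (valid[a] && valid[b] && valid[c] && valid[d])
      m.Faces.AddFace(a, b, c, d);
  }
m.Compact(); // drop vertices that ended up unreferenced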