Align two meshes / point clouds

Hi everyone,

My basic idea: I have two meshes which are almost the same, and they should be aligned so that they overlap. One of the meshes is the floating one, the other the target, and in the end I would like to get a transform matrix for the floating mesh.

My first question: is there already something in Grasshopper I can achieve this with? If not, my idea would be to use the open3d library (Python) or an implementation of Integrative Closest Point in C#. Are there any hints on how I can use the open3d library in Grasshopper, or is the C# approach more recommended due to better performance?

I appreciate any kind thoughts.


Closest point is just one line:

`Point3dList` class: `Point3d closest = Point3dList.ClosestPointInList(pList, testPt);`
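For context, here is a minimal pure-Python sketch of what a closest-point-in-list lookup does (brute force, illustration only — inside Grasshopper you would of course use the RhinoCommon call above; the function name here is ours):

```python
# Brute-force nearest-point lookup: a plain-Python illustration of the
# idea behind Point3dList.ClosestPointInList (names here are our own).
import math

def closest_point_in_list(points, test_pt):
    """Return the point in `points` nearest to `test_pt` (3-tuples)."""
    return min(points, key=lambda p: math.dist(p, test_pt))

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(closest_point_in_list(pts, (0.9, 0.1, 0.0)))  # → (1.0, 0.0, 0.0)
```

This is O(n) per query; for the zillion-vertex case discussed below, a spatial structure (e.g. an R-tree or k-d tree) is the realistic choice.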

Other things are also available (for instance mesh connectivity etc etc ) …

… so why bother with libraries and similar freaky things?

Other than that this is easily solvable … but it requires some more explanation: what exactly do you want to do? Provide it like talking to an idiot => (1) this is a valid Mesh (where `mesh.Vertices.CombineIdentical(true, true)` yields a valid result), (2) it has naked/clothed vertices, (3) this is another valid Mesh (where …), (4) it also has naked/clothed vertices, (5) using (2+4) I want to do … blah, blah.

Thank you Peter for your thoughts.
Here is a more detailed explanation (I hope it will help):

(1) is a mesh that is located close to (0/0/0) and represents a scan of a room (spatial Mapping provided by the Hololens)

(2) is also a scan of the same room; however, its orientation differs from (1) and its geometry differs slightly

mesh (2) should be placed exactly like mesh (1), and I would like to get the associated transform matrix

M1 and M2 are the same? Meaning: same V, E, F PLUS identical connectivity trees PLUS identical indices in the related connectivity trees. But … I can hardly imagine this happening (due to the nature of the ball-pivoting algo used on the LIDAR pts … not to mention the scan itself).

If not, that’s not an easy walk to the mild side of things: get a zillion vertices from this, a zillion vertices from that, and start doing comparisons (which may end the next day/week/month/year/decade).

https://hal-mines-paristech.archives-ouvertes.fr/hal-01097361/document

And why are you after freaky reverse engineering stuff?

I think you meant ‘iterative’?

https://en.m.wikipedia.org/wiki/Iterative_closest_point
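For reference, the core of point-to-point ICP fits in a short NumPy sketch: repeatedly match each floating point to its nearest target point, then solve the best-fit rigid transform for those pairs via SVD (the Kabsch method). This is a toy illustration under simplifying assumptions (brute-force matching, fixed iteration count, no outlier rejection), not production code:

```python
# Minimal point-to-point ICP sketch (NumPy). Illustration only:
# real implementations use k-d trees, convergence checks and outlier handling.
import numpy as np

def best_fit_transform(A, B):
    """Rigid (R, t) minimizing ||(A @ R.T + t) - B|| for paired Nx3 sets (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(floating, target, iters=30):
    """Return a 4x4 matrix aligning `floating` (Nx3) onto `target` (Mx3)."""
    src = floating.copy()
    T = np.eye(4)
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force, for clarity)
        d = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d.argmin(axis=1)]
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T                  # accumulate the total transform
    return T
```

Note ICP only converges to the nearest local minimum, so it needs a reasonable initial alignment — which is exactly why the coarse/pattern-matching discussion below matters.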

Open3d looks pretty good, but beware: Rhino uses IronPython, so you can’t `import open3d`.
It might be best to find a C# implementation. I found this, but it looks old: https://github.com/braddodson/pointmatcher.net/blob/master/readme.md
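One common workaround for the IronPython limitation is to run the open3d part in an external CPython process and exchange data via stdin/stdout or files. A hedged sketch of the round trip (the interpreter path and the inline script are assumptions for illustration; in Rhino you would point at a CPython install that actually has open3d):

```python
# Sketch: shelling out to an external CPython from (Iron)Python, since
# open3d is a CPython extension and cannot be imported in IronPython.
# Here we just run the current interpreter to demonstrate the round trip.
import subprocess, sys, json

# Stand-in for a script that would load open3d and run registration;
# this one merely echoes back how many points it received.
code = "import json, sys; pts = json.load(sys.stdin); print(json.dumps(len(pts)))"

proc = subprocess.run([sys.executable, "-c", code],
                      input=json.dumps([[0, 0, 0], [1, 2, 3]]),
                      capture_output=True, text=True, check=True)
print(proc.stdout.strip())  # → 2
```

The real external script would deserialize the two point sets, call open3d’s registration, and print the resulting 4x4 matrix back for Grasshopper to parse.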

Can https://www.food4rhino.com/app/volvox do this @MateuszZwierzycki ?

Since the 2 LIDAR sets (the meshes, that is) differ in an unpredictable way … what we have here is a pattern recognition task and not a point-to-point match.

Since there’s an assurance that the 2 scans relate to the same target … this could narrow the time required: find some characteristic pattern-pair “portions” (or just one pair) that can safely be used for finding the trans matrix. Call Samuel (CRAY) for bargain prices.
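One cheap way to get a rough initial guess before any fine alignment (simpler than the pattern-pair idea above, and only a sketch) is to match the centroids and principal (PCA) axes of the two clouds. Caveat, stated plainly: principal-axis signs are ambiguous, so a real implementation must test the sign combinations and keep the one with the lowest residual:

```python
# Crude coarse alignment: match centroids and principal (PCA) axes.
# Rough initial guess only; eigenvector signs are ambiguous, so this can
# be off by axis flips — a real version scores all sign combinations.
import numpy as np

def pca_frame(pts):
    """Centroid and principal-axis frame (3x3 rotation) of an Nx3 cloud."""
    c = pts.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov((pts - c).T))
    if np.linalg.det(vecs) < 0:      # keep it a proper rotation
        vecs[:, 0] *= -1
    return c, vecs

def coarse_align(floating, target):
    """4x4 matrix mapping the floating cloud's PCA frame onto the target's."""
    cf, Vf = pca_frame(floating)
    ct, Vt = pca_frame(target)
    R = Vt @ Vf.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = ct - R @ cf
    return T
```

The result is only good enough to seed an iterative refinement such as ICP; identical clouds map to the identity, as expected.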

Yes, I meant iterative.


The basic idea is to set objects in a previously scanned room; later, at runtime, they are placed in the right position, given by the transformation matrix created by the mesh / point cloud alignment.
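Applying the stored 4x4 matrix at runtime is then just a homogeneous multiply (a NumPy sketch; inside Rhino you would feed the same matrix to a Transform instead):

```python
# Applying a stored 4x4 transformation matrix to points (homogeneous coords).
import numpy as np

def apply_transform(T, pts):
    """Apply a 4x4 matrix T to an Nx3 list/array of points."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # append w = 1
    return (homog @ T.T)[:, :3]

T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]              # pure translation as a demo
print(apply_transform(T, [[0, 0, 0]]))  # → [[1. 2. 3.]]
```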

align 3D scans :tipping_hand_man:t4:

lol’s

The singularity is near.

Only a top C# Guru (of Gurus (of Gurus (of …))) can solve that problem.
