My basic idea is to have two meshes that are almost the same; they should be aligned so that they overlap. One of the meshes is the floating one, the other the target, and in the end I would like to get a transform matrix for the floating mesh.
My first question: is there already something in Grasshopper I can achieve this with? If not, my idea would be to use the open3d library (Python) or a C# implementation of Iterative Closest Point (ICP). Are there any hints on how to use the open3d library in Grasshopper, or is the C# approach more recommended due to better performance?
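For orientation, here is a minimal pure-NumPy sketch of what ICP does under the hood, whichever route you take: brute-force nearest-neighbour correspondences plus a Kabsch/SVD rigid fit, accumulated into a 4x4 matrix. The function names are mine, open3d wraps the same idea (with a proper KD-tree) in its `registration_icp` pipeline, and this toy version is O(n·m) per iteration — for understanding, not production:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD: 4x4 rigid transform mapping src points onto dst points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = c_dst - R @ c_src
    return T

def icp(floating, target, iters=20):
    """Naive ICP: nearest neighbours + Kabsch each iteration, returns 4x4."""
    T_total = np.eye(4)
    pts = floating.copy()
    for _ in range(iters):
        # nearest target point for every floating point (brute force)
        d = np.linalg.norm(pts[:, None] - target[None], axis=2)
        T = best_rigid_transform(pts, target[d.argmin(axis=1)])
        pts = pts @ T[:3, :3].T + T[:3, 3]
        T_total = T @ T_total          # accumulate into one matrix
    return T_total
```

Note that plain ICP only converges to the right answer when the two clouds already start roughly aligned — otherwise it happily locks onto a wrong local minimum.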
Other things are also available (for instance mesh connectivity, etc.) …
… so why bother with libraries and similar freaky things?
Other than that this is easily solvable … but it requires some more explanation: what exactly do you want to do? Spell it out like you're talking to an idiot => (1) this is a valid Mesh (where mesh.Vertices.CombineIdentical(true, true) yields a valid result), (2) it has naked/clothed vertices, (3) this is another valid Mesh (where …), (4) it also has naked/clothed vertices, (5) using (2+4) I want to do … blah, blah.
Are M1 and M2 the same? Meaning: same V, E, F PLUS identical connectivity trees PLUS identical indices in the related connectivity trees. But … I can hardly imagine this happening (due to the nature of the ball-pivot algorithm used on the LIDAR points … not to mention the scan itself).
If not, that’s not an easy walk to the mild side of things: get a zillion vertices from one, a zillion vertices from the other, and start doing comparisons (which may finish the next day/week/month/year/decade).
Since the 2 LIDAR sets (the meshes, that is) differ in an unpredictable way … what we have here is a pattern-recognition task and not a point-to-point match.
Since there’s an assurance that the 2 scans are related to the same target … this could narrow the time required: find some characteristic pattern-pair “portions” (or just one pair) that can safely be used for finding the transform matrix. Call Samuel (CRAY) for bargain prices.
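Short of a CRAY, one cheap way to get a coarse initial guess before refining with ICP is to match centroids and principal axes of the two clouds. This is my own sketch, not anything from a library, and it has a known weakness: PCA axes have sign ambiguities and need well-separated singular values, so in practice you would try the few axis-flip combinations and keep the one with the lowest residual:

```python
import numpy as np

def pca_coarse_align(floating, target):
    """Coarse 4x4 guess: match centroids and principal axes of two clouds."""
    def frame(P):
        c = P.mean(axis=0)
        _, _, Vt = np.linalg.svd(P - c)   # rows of Vt = principal axes
        R = Vt.T
        if np.linalg.det(R) < 0:          # keep it a proper rotation
            R[:, -1] *= -1
        return c, R
    c_f, R_f = frame(floating)
    c_t, R_t = frame(target)
    R = R_t @ R_f.T                       # rotate floating frame onto target frame
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = c_t - R @ c_f
    return T
```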
The basic idea is to set objects in a previously scanned room; later, during runtime, they are placed in the right position given by the transform matrix which is created by the mesh/point-cloud alignment.
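Once the alignment yields that 4x4 matrix, the runtime placement step is just applying it to the stored object positions in homogeneous coordinates. A minimal sketch (assuming points as rows of an (n, 3) array):

```python
import numpy as np

def apply_transform(T, points):
    """Apply a 4x4 homogeneous transform to an (n, 3) array of points."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # append w = 1
    return (homo @ T.T)[:, :3]
```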