Suggest this gets split into a different topic…
Interesting discussion. In 1996 I started doing reverse engineering while working for a quality engineering group at Boeing. Using a Hymark laser camera attached to an LK CMM, our part size was at first limited to what fit on the CMM’s granite bed. We quickly figured out that by super-gluing a lot of tooling balls to the parts, we could scan parts that were longer than the CMM bed. Our method of sticking multiple scans together was fairly simple: we extracted the centers of the tooling balls and used this point cloud to stitch the scans together.
To clean up the scans we used a simple process. Since we had limited time to do our scans, with almost no ability to redo the scanning, we collected hundreds of thousands of points for each scan. After stitching these files together we would pull the combined point cloud up on the screen, the result looking like a white blob. We would then start decimating this blob of points until it looked like the original part, then undo the last decimation. This gave us really good results. Software to automate this work either didn’t exist or was too crude to take seriously. These were great projects to work on.
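For anyone curious, that stitching step amounts to a rigid best-fit between corresponding tooling-ball centers. A minimal sketch of the idea in Python (the Kabsch/Procrustes solution here is my stand-in for illustration, not necessarily what our software actually did):

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (Kabsch / orthogonal Procrustes). src, dst: (N, 3) arrays of
    corresponding points, e.g. tooling-ball centers seen in two scans."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))           # guard against a reflection
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy check: recover a known 30-degree rotation plus a translation.
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
balls = np.random.default_rng(0).random((5, 3)) * 100.0  # ball centers, scan A
moved = balls @ R_true.T + np.array([10.0, -5.0, 2.0])   # same balls, scan B
R, t = rigid_align(balls, moved)
assert np.allclose(balls @ R.T + t, moved, atol=1e-8)
```

Once the transform is known from the ball centers alone, it gets applied to the full scan, which is why a handful of glued-on balls is enough to merge scans longer than the bed.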
I’m not sure what you mean by “alignment algos” but you seem to sometimes use the term to indicate the creation of NURBS surfaces atop a mesh, as implied above. If that’s the case then I think you substantially underestimate the task of RE and the immense complexity involved at the mathematical behind-the-scenes level. At its core, RE involves more than “just” making a series of NURBS patches “fit” (“align” in your terminology?) to a cloud of points or mesh vertices. But even using that simplistic process as the goal, the task will never be push-button automatic.
RE involves making decisions that a computer cannot make. A scanned data set is just a cloud of points. A mesh gives a positional network connecting these points that provides proximity order to the cloud, but it’s still just a data set; it contains no information about direction, axes, symmetry, or more esoteric information such as what kind of curves were used to generate the shape being scanned. And every point in the data set has an uncertainty associated with it that permits the RE surfaces to miss the exact location described by the numerical values recorded for that point while still declaring the RE surfaces to “exactly” fit the data. Never mind that none of the points are aligned in anything resembling planar “slices” which could be evaluated as curve-fits vs. the surface-fits so far discussed, or that any such slices might be aligned with a principal axis of the object being modeled.
So let’s simplify the discussion and only investigate the finer points of curve-fitting. There are literally hundreds of different curve-fitting algorithms, of which only a small subset are implemented in the kernels underlying most CAD programs, and those algorithms are designed around the mathematical representations of curves employed internally by the CAD program. Many curves historically used in design have no exact representation in CAD, and others have exact representations only when explicitly designated. For example, it is widely known that the P-51 Mustang was designed using conic sections as the basic curve type. You might mistakenly try to model a Mustang using degree-two NURBS curves, even constraining yourself to only using single-span curves, but you’d be quickly frustrated because moving the middle control point to get the curve to “bulge” as desired would destroy the required tangency directions at the ends of the curve; the conic section employed by Liming was always inscribed within a right angle and the “bulge” was controlled by the weight assigned to the middle control point. So the “method of conics” used to define the shape of the Mustang, as it has been called for eighty years, is a misnomer; it’s actually a “method of constrained degree-two rational Bézier curves,” and the RE specialist needs to know this prior to applying his talents—no “automatic” RE “alignment algorithm” can bring this kind of knowledge to bear on the task.
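Liming’s construction is easy to demonstrate numerically: fix the control triangle (which fixes the end tangents) and let the middle weight alone control the fullness. A minimal sketch, purely illustrative and not any CAD kernel’s actual internals:

```python
import numpy as np

def rational_bezier2(p0, p1, p2, w, t):
    """Degree-two rational Bezier: endpoints p0, p2; middle control point
    p1 carrying weight w. Varying w changes only the 'bulge' -- the end
    tangents stay along the legs of the control triangle. w < 1 gives an
    ellipse arc, w = 1 a parabola, w > 1 a hyperbola."""
    t = np.asarray(t)[:, None]
    b0, b1, b2 = (1 - t) ** 2, 2 * t * (1 - t), t ** 2
    num = b0 * p0 + b1 * w * p1 + b2 * p2
    den = b0 + b1 * w + b2
    return num / den

# Right-angle control triangle, as in Liming's method.
p0, p1, p2 = np.array([1.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 1.0])
t = np.linspace(0.0, 1.0, 201)
arc = rational_bezier2(p0, p1, p2, w=np.sqrt(0.5), t=t)
# With w = cos(45 deg) this conic is exactly a quarter of the unit circle:
assert np.allclose(np.hypot(arc[:, 0], arc[:, 1]), 1.0)
```

Try to get the same circular arc by dragging the middle control point of an unweighted degree-two curve and you destroy the end tangencies, which is exactly the frustration described above.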
Likewise, items designed prior to WWII were typically designed using circular arcs, elliptical arcs, and logarithmic spiral curves provided by any of various “French curves” at the draughtsman’s table, then approximated on the shop floor by a spline interpolating points given from tables of offsets measured from that draughtsman’s drawing. Other designs employ polar functions and even trigonometric functions to generate their curves. There is no way to exactly match a trigonometric or logarithmic curve in CAD; the Bernstein polynomial-based math used in CAD simply is incapable of it, but good approximations are possible if one understands the CV placement rules required—again, a level of knowledge that the RE artist must bring to the task and a decision that cannot be encoded in an “automatic algorithm” (although a few of the several algorithms to constrain NURBS curves to e.g. monotonic curvature and logarithmic curve approximation would be welcome additions to the Rhino toolset). So much for curve fitting; surface fitting only compounds the problem by orders of magnitude especially when considering that guide curves on the orthogonal axes of the original might have been of very different curve types.
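To make the approximation point concrete, here is a quick numerical illustration: a least-squares cubic fit to a short logarithmic-spiral arc gets close but is never exact, because no polynomial reproduces an exponential. The constants and arc span are arbitrary choices of mine:

```python
import numpy as np

# A logarithmic spiral r = a * exp(b * theta) has no exact polynomial or
# NURBS form; we can only approximate it. Below: least-squares fit of a
# cubic (degree-3 polynomial in t, per coordinate) to a quarter-turn arc,
# then measure the residual.
a, b = 1.0, 0.2
theta = np.linspace(0.0, np.pi / 2, 100)
r = a * np.exp(b * theta)
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

t = np.linspace(0.0, 1.0, 100)
A = np.vander(t, 4)                          # cubic basis: t^3, t^2, t, 1
coef, *_ = np.linalg.lstsq(A, pts, rcond=None)
resid = np.linalg.norm(A @ coef - pts, axis=1)
print(f"max deviation of cubic fit: {resid.max():.2e}")
```

The residual is small but never zero; driving it below a stated tolerance (rather than to zero) is exactly the judgment call the RE artist has to make, informed by knowing what kind of curve the original was.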
My point here is that you seem to underestimate the complexity of the task when you dismiss those who build software for not providing the tools to do modeling in meshing software or to do mesh manipulation in modeling software. Meshes are made from raw data points and should never have their vertices adjusted (although eliminating outliers, etc., is certainly a required task); NURBS patches are fit to that raw data in modeling software by skilled technicians employing knowledge of the design principles used to generate the original artifact, its principal axes and planes of symmetry, etc., coupled with an understanding of the tolerances to be applied to the raw data. Perhaps someday AI will replace such technicians but, until then, you are the expert the client relies on to bring that talent. Software will never be able to provide “alignment algos” that can anticipate the genesis of the artifact being reverse engineered and semi-automatically or automatically fit NURBS (or any other!) surfaces to meshes measured from it.

I’m not sure what you mean by “alignment algos” but you seem to sometimes use the term to indicate the creation of NURBS surfaces atop a mesh
No, the implication above was that ‘the creation of NURBS atop a mesh’ is one reverse-engineering methodology, and ‘the alignment of point clouds, mesh data scans, etc.’ is another reverse-engineering methodology.
Those two methodologies are not mutually exclusive – in terms of their existence – so having one without the other is super silly, imo.
It makes no sense to say, ‘software that can streamline the second half of the reverse-engineering process isn’t a reverse-engineering program, and therefore shouldn’t have the ability to align clouds and meshes.’
There’s no point in having tools to convert mesh to NURBS without first having tools to align clouds and meshes, and vice versa – that’s been my point.

If that’s the case then I think you substantially underestimate the task of RE and the immense complexity involved at the mathematical behind-the-scenes level. At its core, RE involves more than “just” making a series of NURBS patches “fit” (“align” in your terminology?) to a cloud of points or mesh vertices.
I’m thoroughly aware.

even using that simplistic process as the goal, the task will never be push-button automatic.
That used to be truer about 15 years ago, but it’s becoming ever more automatic as we go forward.
I never liked automation in this sense in the past. But today, in R7 and R8, you can do quad remeshing, which is a more advanced version of this automatic ability than I’d ever seen before – even in RE programs. Which is another reason why I say Rhino has become an RE program, and therefore should have the ability to align clouds and meshes.

RE involves making decisions that a computer cannot make.
True, but in practice RE involves the computer’s algorithms making more of the decisions than the human does – imo.
I would say the human needs to intervene and guide the algos in the ‘accurate’ direction, because mostly what I’ve been saying is that the human can obtain higher-quality data when the algos aren’t aimlessly aligning the data all the time.
Instead, it should be a controlled process done in increments – which is why I’ve been trying to emphasize the ‘alignment’ methodology I’ve been speaking of in multiple threads. The world seems to think the algos do it automatically, on their own, with zero intervention by the human.
This is sad, because it leads to messy data. Maybe someday, if AI actually becomes AI, this might not be the case. But until then, imo, we need to stop thinking that the ‘alignment’ stage is automatic.
It’s a paradox, because I’m not saying the alignment should be 100% manually done by the user.
I’m saying the user should have the opportunity to systematically access the alignment algos in order to align data in a controlled manner – without assuming the computer just handles it all 100% in real time.

A scanned data set is just a cloud of points. A mesh gives a positional network connecting these points that provides proximity order to the cloud, but it’s still just a data set; it contains no information about direction, axes, symmetry, or more esoteric information such as what kind of curves were used to generate the shape being scanned. And every point in the data set has an uncertainty associated with it that permits the RE surfaces to miss the exact location described by the numerical values recorded for that point while still declaring the RE surfaces to “exactly” fit the data. Never mind that none of the points are aligned in anything resembling planar “slices” which could be evaluated as curve-fits vs. the surface-fits so far discussed, or that any such slices might be aligned with a principal axis of the object being modeled.
I disagree. The data is certainly capable of carrying normal-vector and world-axis information, texture, etc.

So let’s simplify the discussion and only investigate the finer points of curve-fitting.
I’m not even referring to ‘curve fitting’ – unless you’re referring to the ‘curvature’ of the geometry that needs to be aligned.

There are literally hundreds of different curve-fitting algorithms, of which only a small subset are implemented in the kernels underlying most CAD programs, and those algorithms are designed around the mathematical representations of curves employed internally by the CAD program. Many curves historically used in design have no exact representation in CAD, and others have exact representations only when explicitly designated. For example, it is widely known that the P-51 Mustang was designed using conic sections as the basic curve type. You might mistakenly try to model a Mustang using degree-two NURBS curves, even constraining yourself to only using single-span curves, but you’d be quickly frustrated because moving the middle control point to get the curve to “bulge” as desired would destroy the required tangency directions at the ends of the curve; the conic section employed by Liming was always inscribed within a right angle and the “bulge” was controlled by the weight assigned to the middle control point. So the “method of conics” used to define the shape of the Mustang, as it has been called for eighty years, is a misnomer; it’s actually a “method of constrained degree-two rational Bézier curves,” and the RE specialist needs to know this prior to applying his talents—no “automatic” RE “alignment algorithm” can bring this kind of knowledge to bear on the task.
I don’t think we’re referring to the same thing in terms of the nomenclature of ‘alignment’.

Likewise, items designed prior to WWII were typically designed using circular arcs, elliptical arcs, and logarithmic spiral curves provided by any of various “French curves” at the draughtsman’s table, then approximated on the shop floor by a spline interpolating points given from tables of offsets measured from that draughtsman’s drawing. Other designs employ polar functions and even trigonometric functions to generate their curves. There is no way to exactly match a trigonometric or logarithmic curve in CAD; the Bernstein polynomial-based math used in CAD simply is incapable of it, but good approximations are possible if one understands the CV placement rules required—again, a level of knowledge that the RE artist must bring to the task and a decision that cannot be encoded in an “automatic algorithm” (although a few of the several algorithms to constrain NURBS curves to e.g. monotonic curvature and logarithmic curve approximation would be welcome additions to the Rhino toolset). So much for curve fitting; surface fitting only compounds the problem by orders of magnitude especially when considering that guide curves on the orthogonal axes of the original might have been of very different curve types.
This can certainly become relevant for discussion later in the workflow, but the ‘alignment’ I’m referring to is back at the point-cloud alignment and/or mesh alignment stage – before the mesh-to-NURBS stage.

My point here is that you seem to underestimate the complexity of the task when you dismiss those who build software for not providing the tools to do modeling in meshing software or to do mesh manipulation in modeling software.
Well, I’m just trying to get them there, so to speak. I’m being, idk what the word is, somewhat ‘insouciant’ toward the complexity of the algos needed, maybe.

Meshes are made from raw data points and should never have vertices adjusted (although eliminating outliers, etc., are certainly required tasks)
I’m not saying they should be ‘adjusted’, per se. I’d actually prefer the vertices NOT be moved.
I might ‘decimate’ a mesh, but I never really manipulate the position of vertices unless I’m ‘remeshing’ or something.
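To illustrate what I mean by decimating without moving anything: thin the cloud by keeping one original point per grid cell, so every surviving vertex retains its measured coordinates. A sketch of that idea (function name and cell size are just my example choices):

```python
import numpy as np

def decimate_keep_originals(points, cell):
    """Thin a point cloud by keeping ONE original point per grid cell of
    size `cell`. Points are dropped, never repositioned -- unlike voxel
    *averaging*, which would invent coordinates that were never measured."""
    keys = np.floor(points / cell).astype(np.int64)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(keep)]

rng = np.random.default_rng(2)
cloud = rng.random((10_000, 3))                 # stand-in for a raw scan
thinned = decimate_keep_originals(cloud, cell=0.1)
# every row of `thinned` is a verbatim row of `cloud`
```

The point of the design is that the output is a strict subset of the input, so the raw measurements are preserved exactly, just at lower density.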
Maybe I’ll have to provide videos to demonstrate my trade-secret workflows in this regard – which I’ve done before and am willing to do again, especially given what NextEngine has done to my future.

NURBS patches are fit to that raw data in modeling software by skilled technicians employing knowledge of the design principles used to generate the original artifact, its principal axes and planes of symmetry, etc., coupled with an understanding of the tolerances to be applied to the raw data.
I’m thoroughly aware, but I’m mostly referring to the stages in the workflow that come before those NURBS stages. I’m confident that Rhino has had that stage handled since R7.
Even a skilled CAD operator could handle it in versions prior to R7, as you’ve touched on with your mentions of ‘technical skill’.

Perhaps someday AI will replace such technicians but, until then, you are the expert the client relies on to bring that talent.
Indeed. And I’m still skeptical.

Software will never be able to provide “alignment algos” that can anticipate the genesis of the artifact being reverse engineered and semi-automatically or automatically fit NURBS (or any other!) surfaces to meshes measured from it.
This is almost what I’ve been saying, but I’m mostly referring to the odd direction the industry is heading, whereby they believe the user doesn’t need to intervene and guide the computer to align the data scans correctly.
The industry is attempting to eliminate the user’s ability to interact with the scan-alignment process, which would be fine if the software weren’t so bad at it. That’s the point I’ve been trying to explicate.
I’m not even really talking about the mesh-to-NURBS stage, although at some point I would.
Theoretically, there might be advantages to converting 3D-scanned data into NURBS prior to alignment, but I wouldn’t do it that way, because I believe it introduces an extra opportunity for error and deviation to creep in. That’s why I’d only do it later down the road – after the mesh is thoroughly aligned, fused, etc.
I’d also imagine the NURBS would pose a heavy computational burden if evaluated directly all the time – hence all the ‘render mesh’ things and such.

Maybe I’ll have to provide videos to demonstrate my trade secret workflows in this regard
Yes, please do.