I’ll try working on networking these ‘seed’ srfs soon.
I disagree; that’s exactly why the scans should be done properly in the first place. But when reverse engineering, even bad scans are still important for establishing a principally accurate starting point.
Yes, agreed. But the design intent should be the gauge used for these decisions.
At least the foundation (starting point) should always conform to the original data as much as possible, no matter how bad the data is.
From that point, you can do whatever the design intent entails. I suppose if the design intent means ignoring 25% of the scan data, then that’s another story.
If Rhino/GH had some sort of RE GUI built on the Eto framework, with options for choosing a ‘conformation’ tolerance that controls how the ‘fitment’ to the scan data is calculated, then perhaps the user could choose something like a 3%–25% tolerance of conformation to said scanned data, etc.
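To make that concrete, here’s a rough sketch of what such a ‘conformation’ check could do under the hood (Rhino Python with rhinoscriptsyntax; the function name and the 0.05 tolerance are just placeholders I made up, not an actual Rhino feature — a real GUI would expose that value as the user-chosen setting):

```python
import rhinoscriptsyntax as rs

def conformance_report(srf_id, scan_points, tol=0.05):
    """Measure how well a fitted surface conforms to scan points.
    tol is an absolute deviation in model units (placeholder value)."""
    deviations = []
    for pt in scan_points:
        uv = rs.SurfaceClosestPoint(srf_id, pt)  # pull each scan point to the surface
        deviations.append(rs.Distance(pt, rs.EvaluateSurface(srf_id, uv[0], uv[1])))
    within = sum(1 for d in deviations if d <= tol)
    print("max deviation: {:.4f}".format(max(deviations)))
    print("{:.1f}% of scan points within {}".format(
        100.0 * within / len(deviations), tol))

srf = rs.GetObject("Select fitted surface", rs.filter.surface)
pts = [rs.PointCoordinates(p) for p in rs.GetObjects("Select scan points", rs.filter.point)]
conformance_report(srf, pts)
```

The percentage it prints is roughly the ‘conformation’ idea above: the user picks a tolerance, and the tool reports how much of the scan the fitted surface actually honors.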
And yes, obviously if the given scanned data is so sloppy that it has gaps of approx. 0.025mm–0.13mm in some places, then perhaps that warrants the use of a relatively medium-to-large tolerance.
And yes, if the scanned data is ‘degree 1’ and hence faceted everywhere, then surely the conversion into degree-3 (or so) NURBS will warrant some sort of smoothing tolerance as well…
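For example, a hedged sketch of that degree-1-to-degree-3 step (again rhinoscriptsyntax; the degree, control-point counts, and sample density are illustrative guesses a real tool would let the user tune): rebuild a copy and report how far the smoothing pulls away from the faceted original.

```python
import rhinoscriptsyntax as rs

def rebuild_smooth(srf_id, degree=3, count=20, samples=15):
    """Rebuild a copy of a faceted (degree-1) surface to the given degree
    and report the maximum deviation the smoothing introduces."""
    rebuilt = rs.CopyObject(srf_id)
    rs.RebuildSurface(rebuilt, degree=(degree, degree), pointcount=(count, count))
    du, dv = rs.SurfaceDomain(srf_id, 0), rs.SurfaceDomain(srf_id, 1)
    max_dev = 0.0
    for i in range(samples + 1):
        for j in range(samples + 1):
            u = du[0] + (du[1] - du[0]) * float(i) / samples
            v = dv[0] + (dv[1] - dv[0]) * float(j) / samples
            pt = rs.EvaluateSurface(srf_id, u, v)  # point on the faceted original
            uv = rs.SurfaceClosestPoint(rebuilt, pt)
            max_dev = max(max_dev, rs.Distance(pt, rs.EvaluateSurface(rebuilt, uv[0], uv[1])))
    print("smoothing deviation (max): {:.4f}".format(max_dev))
    return rebuilt

rebuild_smooth(rs.GetObject("Select faceted surface", rs.filter.surface))
```

If the reported deviation lands inside whatever smoothing tolerance the data quality condones, the rebuild is defensible; if not, more control points (or a different degree) are needed.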
But my RE technique is to take things one step at a time, especially when the design intent isn’t well defined; you never know what might be overlooked at any particular stage of the workflow, from scan to production and out in the field during use by the customer, etc.