I'm trying to do some modeling based on scans. Basically I want to be able to model sections of 3D scans and then use those sections to boolean other objects so they fit perfectly. What is the best way to approach this? Is Rhino the right tool? Is mesh2scan required for this? Also, is there a way to start surfaces from individual vertices? I've been trying to figure out how to do this for quite some time now and I'm not even sure I'm asking the right questions at this point.
How does your scan look?
Have you tried QuadRemesh?
I've asked this question other places and gotten the same first response. The answer is no, I haven't, but it looks like that's the go-to workflow. Can you enlighten me a little on how it's supposed to work? The modeling part, I mean. Is there a tutorial somewhere, even if I have to pay for it? I'd really like to be able to take scans of a car's center console or something, remodel the surfaces I'd like, and then treat the new model like a solid: take it into Fusion and fit mechanical models I've made into it. Steer me in the right direction, please?
So what you're asking about is the most difficult, least-fun job in all of 3D. To do what you want you basically need to be a Rhino surfacing expert first; then you do your reverse engineering (or pay me to do it) by building a quality model from scratch using the scan as a template, not actually directly using any of the data to create curves, as it's not going to be fit for purpose at all.
QuadRemesh might get your 5-million-poly model down to a… multi-thousand-surface model that will bring Fusion to its knees.
It depends on the result of your scan.
QuadRemesh is a command in Rhino that lets you convert a dense mesh into a clean quad mesh or SubD, which you can then turn into a polysurface.
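If you want to try that step as a script, here's a minimal sketch using the RhinoCommon QuadRemesh API in Rhino's Python editor (assuming Rhino 7 or later; the parameter values are just starting points to experiment with, not recommendations):

```python
# Minimal sketch: QuadRemesh a selected scan mesh and convert it to a SubD.
# Assumes Rhino 7+ (QuadRemeshParameters lives in RhinoCommon).
import Rhino
import scriptcontext as sc

rc, objref = Rhino.Input.RhinoGet.GetOneObject(
    "Select scan mesh", False, Rhino.DocObjects.ObjectType.Mesh)
if rc == Rhino.Commands.Result.Success:
    mesh = objref.Mesh()

    params = Rhino.Geometry.QuadRemeshParameters()
    params.TargetQuadCount = 2000   # keep this modest or Fusion will choke
    params.AdaptiveSize = 50        # denser quads in high-curvature areas
    params.DetectHardEdges = True   # try to preserve creases from the scan

    quad_mesh = mesh.QuadRemesh(params)
    if quad_mesh:
        subd = Rhino.Geometry.SubD.CreateFromMesh(quad_mesh)
        if subd:
            sc.doc.Objects.AddSubD(subd)  # run _ToNURBS on this for a polysurface
            sc.doc.Views.Redraw()
```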
That's exactly what I need to do. I understand the level of difficulty involved, but this is something I need to learn how to do even if it takes years. I need to know how the process works so I can figure out where to spend my time learning.
You just generally need to learn Rhino and how to make clean models of advanced things like, oh, automotive interiors. There's nothing specific to focus on; you need to know everything. You need a solid base before getting into the specifics of reverse engineering.
No. That's basically obsolete now imo.
This would be a much more affordable option: RESURF - RhinoResurf, unfold mesh, mesh to NURBS, point cloud to NURBS surface, mesh to solid
But that's obsolete now also, imo.
Reverse engineering isn't easy, though it's getting easier over time, especially with Rhino 7 and now Rhino 8. There are new tools that help this whole process, and new plugins under development that show good signs of a better future.
Technically, while this is true, I could pretty much break it all down to a clear, concise workflow, which I've done from time to time.
I've even explained it to the iRhino developer dudes.
Most ppl don't listen though.
Technically it is easy, if you know all the secrets. Figuring those out is the hard part.
I do appreciate that, and I am in fact attempting to do just that. Every day I'm doing tutorials and modeling for at least 2 hrs a day. Let me try to ask more specifically: where do I start when working with the scan as a template? Firstly, it seems Rhino users prefer to use point clouds instead of meshes. And secondly, in Fusion all you do is turn off history and turn on point snap, and you can begin modeling surfaces directly on the scan vertices. Is there a similar workflow, or a preferred workflow, for template-style modeling?
Thanks. And where do I start?
Modeling directly off the scan points is, I would say, a bad idea, because they are NOT going to be in the ideal locations for a clean NURBS (or subdivision) model, and they're not actually accurate enough, no matter how they were captured. So it's like the scan doesn't matter except as something to check against as you go.
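If it helps, that "check against the scan as you go" idea can even be scripted. This is just a rough sketch of my own, not an established workflow: it subsamples the scan's vertices and reports the worst deviation from your rebuilt surface:

```python
# Illustrative sketch: measure how far a rebuilt surface drifts from the scan.
# The scan is only a reference to check against, never geometry to snap to.
import Rhino
import rhinoscriptsyntax as rs

def max_deviation(brep, scan_mesh, sample_limit=5000):
    """Largest distance from (subsampled) scan vertices to the brep."""
    verts = scan_mesh.Vertices
    step = max(1, verts.Count // sample_limit)  # subsample huge scans
    worst = 0.0
    for i in range(0, verts.Count, step):
        pt = Rhino.Geometry.Point3d(verts[i])
        dist = pt.DistanceTo(brep.ClosestPoint(pt))
        if dist > worst:
            worst = dist
    return worst

brep = rs.coercebrep(rs.GetObject("Select rebuilt surface",
                                  rs.filter.surface + rs.filter.polysurface))
mesh = rs.coercemesh(rs.GetObject("Select scan mesh", rs.filter.mesh))
if brep and mesh:
    print("Max deviation: {:.4f} model units".format(max_deviation(brep, mesh)))
```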
I donāt.
For many objects, I try to export cleaned and closed meshes out of Artec Studio. In Rhino I either shrinkwrap scans and NURBS, or I create sections to rebuild new surfaces.
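For the sections half of that, here's a rough sketch of how it could look in Python (the spacing and rebuild counts are placeholders to adjust per scan; note Rhino 8 also has a CreateContourCurves overload that takes a tolerance):

```python
# Rough sketch: cut section curves through a scan mesh, then rebuild them
# as clean degree-3 curves suitable for lofting new surfaces.
import Rhino
import rhinoscriptsyntax as rs
import scriptcontext as sc

mesh = rs.coercemesh(rs.GetObject("Select scan mesh", rs.filter.mesh))
if mesh:
    bbox = mesh.GetBoundingBox(True)
    start = Rhino.Geometry.Point3d(bbox.Min.X, bbox.Center.Y, bbox.Center.Z)
    end = Rhino.Geometry.Point3d(bbox.Max.X, bbox.Center.Y, bbox.Center.Z)
    spacing = 10.0  # section interval in model units; adjust to your scan

    for crv in Rhino.Geometry.Mesh.CreateContourCurves(mesh, start, end, spacing):
        rebuilt = crv.Rebuild(20, 3, False)  # 20 control points, degree 3
        sc.doc.Objects.AddCurve(rebuilt if rebuilt else crv)
    sc.doc.Views.Redraw()
```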
How well are your scans aligned?
Pretty well. My scan software has an alignment feature; probably within a few thou on each axis, +/- .01" at worst, maybe.
What scanner are you using?
The Einstar from EinScan.
Sounds like you've already started.
I'm curious what software, and scanner too.
Here's a link to a very valuable lesson I learned from @Trav about shrinkwrap.
Basically I used a fun project to learn the possibilities of reverse engineering entirely in Rhino, using Shrinkwrap and Quadremesh. Awesome tools mentioned by @martinsiegrist earlier.
The scanner I used was Scaniverse for iPhone.
Although the project has a high tolerance for deviation, it was still quite fun being able to learn more about those Rhino tools in the process.
That's awesome. I've been wanting to try some of those technologies. I've had my eye on this page for a few years now:
I'm planning on buying one eventually.
I have a project where I might need to be sending some devices out to customers in the field or something. I just wish the iRhino app was better, is all. A minimal training curve is ideal.
I might need to create my own app and figure out Apple's LiDAR API.
This is an example of one of those times I attempted to reveal all the secrets, for the sake of getting Rhino into the realm of high-precision reverse engineering capability:
Trimming away erroneous data is very important; that erroneous data is what causes alignment errors down the line. Most software and ideologies I've seen will disregard this fact.
You'll notice these errors build up over time, especially with the "auto align", "auto decimate", "auto merge" ideologies of most handheld scanning processes.
The scanner is pretty good for the money. Def better than an iPhone, but not as good as a Faro… Haha.

I've been at this for a year now and I've figured out a few things. I've modeled off scans in Fusion, but something kept telling me that wasn't the right software. I tried trial after trial of different software, the general consensus being that, for the money, Rhino is the most powerful design software out there. So I trialed it and bought it after seeing how well it handles SubD modeling. I'm thoroughly impressed, and I'm picking it up slowly.

What I haven't quite figured out is how to model outside of the constraints of each view, i.e. how to build a surface on a new plane that I've created at an angle of my choosing, which I think is directly related to my disconnect on how to model in a 3D environment. Sure, I can model a remote control or maybe a Bluetooth speaker, but once I get my scan into Rhino, where do I begin matching surfaces? I can make a shape just as if I had inserted a canvas, but what's next? I imagine at some point I start pulling verts to line up with my scan, right? How do I create my own sketch plane for when I want to do this at another angle?
Is your "scan" fully aligned and merged?
I'm assuming your scan is in mesh form and ready for conversion to "surfaces". So at that point there are of course many approaches you could take, but the most enjoyable workflow imo is mentioned here:
and:
The approach I prefer is as automatic as possible, though a fully automatic one doesn't necessarily exist 100% yet.
As far as the "sketch plane" or whatnot, I'm uncertain if you mean simply the overall orientation of the object in the workspace or not.
Scanned objects do pretty much end up with a somewhat random overall orientation at some point, and yes, this needs to be resolved by the user in some way.
I used to use a piece of software extensively that basically had some sort of "bounding box" approach the user could use to find the outermost extents by which to square the object up to the workspace. While there are several ways to do this, it's mostly up to the user to decide and track this particular transformation.
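For what it's worth, here's a small sketch of one way to do that squaring-up in Rhino with a script, assuming you can pick three points on a flat feature of the scan (plane-to-plane mapping is just one of those several ways):

```python
# Sketch: square up a scan by mapping a plane picked on the scan to World XY.
import Rhino
import rhinoscriptsyntax as rs

pts = rs.GetPoints(message1="Pick three points on a flat area of the scan",
                   max_points=3)
if pts and len(pts) == 3:
    scan_plane = Rhino.Geometry.Plane(pts[0], pts[1], pts[2])
    # Keep this transform around if you need to map results back to scan space.
    xform = Rhino.Geometry.Transform.PlaneToPlane(scan_plane,
                                                  Rhino.Geometry.Plane.WorldXY)
    scan_id = rs.GetObject("Select scan to orient", rs.filter.mesh)
    if scan_id:
        rs.TransformObject(scan_id, xform)
    # Alternatively, set the viewport CPlane to the picked plane and model
    # in place: rs.ViewCPlane(None, scan_plane)
```

That last comment also answers the earlier "sketch plane" question: setting a custom CPlane lets you model at any angle you choose.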
Here's a cool image that illustrates a permutation of a workflow example, from right to left:
I think it's a good example of what's possible, because my mesh was so bad and had holes everywhere.
I'd better go get to work at my day job, but this is one of my fav subjects. bbl