It contains functionality for the creation and modification of NURBS geometry and meshes, such as:
Modeling of NURBS control vertices
Smoothing of Curves, Surfaces, and Mesh areas
Surface blend creation
Creation of NURBS surfaces on selected Mesh areas
The focus is on easy, intuitive usability, so all modeling commands offer a local Undo/Redo that allows easy comparison of different work stages.
There is also integrated analysis functionality that helps to judge the current geometry's quality. In addition, analysis commands are available, such as:
Sections through geometry
Curvature analysis of Curves and Surfaces
Graphical deviation between objects (e.g., deviation display between a surface and a mesh)
Analysis of transition quality between matched curves or surfaces
The analyses follow all geometry modifications associatively and are dynamically updated during the modeling process.
To be fair - you've always been able to do this with stock Rhino. PointDeviation allows you to select any mesh or point cloud, then set your over/under values, and gives you a color-coded map showing which parts of your NURBS model are in/out of tolerance relative to your reference mesh.
I've been trying to find (still need to keep looking) examples of this. I'm not sure I am 100% satisfied with that particular tool, but I'll keep checking it out.
You can get all sorts of fancy color gradients, but really all I care about is whether the surface is within a given tolerance. So I effectively set my "Good Point" and my "Bad Point" to the same value. Technically you'll see my Good Point is ever so slightly less, since RH7 will not let you set them to the same value. (Curiously, older versions allowed you to do this.) The "Ignore" point should be set to some multiple of Good/Bad - it will not return any value in areas that are beyond that value. This is helpful for areas where you want to ignore the data, where the clear intent is to omit a feature.

I find the "Display Hair" option distracting and don't use it, but it will give you a graphical representation of which side of the reference data your surface is on - so in some cases this may be useful, but typically you can just… look and see what side it's on. If you point edit your surfaces, the deviation map will update as well, so there's an interactivity to it that is helpful.

Best as I can tell, there's no additional functionality that you'll get with the Cyberstrak tool, and I say this as someone who is very excited about Cyberstrak.
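The Good/Bad/Ignore mapping described above can be sketched in plain Python. The names follow the command's three points, but the sample numbers and the linear blend between Good and Bad are my illustration, not Rhino's actual implementation:

```python
# Illustrative sketch of a Good/Bad/Ignore color mapping. The threshold
# names mirror PointDeviation's settings; the blend logic is assumed.

def color_value(deviation, good=0.2, bad=2.0, ignore=10.0):
    """Map an unsigned deviation to a 0..1 gradient position.

    Returns None beyond `ignore` (the point gets no color),
    0.0 at or below `good`, 1.0 at or above `bad`, and a
    linear blend in between.
    """
    d = abs(deviation)
    if d > ignore:
        return None          # beyond Ignore: point is not colored
    if d <= good:
        return 0.0           # in tolerance
    if d >= bad:
        return 1.0           # out of tolerance
    return (d - good) / (bad - good)  # gradient between Good and Bad

for dev in (0.1, 1.1, 3.0, 25.0):
    print(dev, color_value(dev))
```

Setting Good and Bad (nearly) equal, as described, collapses the gradient into a simple in/out-of-tolerance map.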
Also worth noting - the statistics at the bottom are very nice when it comes to validating a NURBS model against scanned data for a client.
The vast majority of your model is beyond your Ignore point; that's why it looks the way it does. Might also want to turn off the "Hair" as I said above. Also - your Good Point setting should be informed by the quality/accuracy of your laser-scanned data. That data looks… rough.
K, so even after making adjustments, it still shows that this so-called PointDeviation tool lacks the ability to illustrate the deviation intuitively with a smooth gradient.
It appears that the underlying polysurface has too much effect on the sparse depiction of the deviation.
Maybe the term I should use is a "decimated" depiction, even though sparse still applies. You can see the polysrf is determining the densities, which should basically not be how this is done.
The polysrf point density should be irrelevant, IMO.
Maybe if I could select the mesh first rather than the poly, but I'm not sure that would change the result.
I would probably have to create a special poly that has homogeneous density to make it work better.
And to add one more thing - from a workflow perspective, what I've found to work well and efficiently is to first get your NURBS data to visually match your stl reference. I'll often assign different color materials to the stl data and my NURBS data. Once they start "z fighting" visually, then it's appropriate to bring up PointDeviation and start looking for areas that need further refinement to hit tolerance. Either VSR's Control Point Modeling or Cyberstrak's CV Modeling tools are invaluable for hitting tolerance. You can run PointDeviation and then edit your surface with CV Modeling. When you hit "OK" and CV Modeling pops out the revised surface, PointDeviation will update and reflect the new deviation map.
You've skipped the first step - your geometry is so far away from your reference that you're not getting any kind of useful information.
The Polysrf density has no bearing whatsoever on the visual result. It will return a color value based on each point of your mesh, not the density of your polysurface. The density of the color map is driven solely by the density of your stl mesh. If you're not seeing a result for any given vertex of your stl, that point is beyond your Ignore point.
Start with something like
Ignore = 10.
Bad = 2.
Good = 0.2
and see what it looks like.
Then revise and tighten the settings based on what you see.
If you are using the mesh as the source of points, and the polysurface as the object to be tested then the polysurface does not affect what is displayed other than the amount of deviation.
That is how Ignore works. Any points greater than the Ignore distance from the surface are ignored (not colored).
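As a plain-Python sketch of that rule (the deviation values and the Ignore distance are made up for illustration; this is not Rhino's implementation):

```python
# Sketch of the Ignore rule: points farther from the target than the
# Ignore distance get no color at all. All values are illustrative.

IGNORE = 10.0

deviations = [0.15, 1.8, 4.2, 11.5, 30.0]
colored = [d for d in deviations if d <= IGNORE]
ignored = [d for d in deviations if d > IGNORE]

print(colored)  # [0.15, 1.8, 4.2]
print(ignored)  # [11.5, 30.0]
```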
Here's what z fighting looks like; this is how you know your surface is close to your mesh:
Here's how a color map gradient looks with appropriate settings - I'm showing this at a tighter tolerance than the customer needed, so you can see the areas that are out of tolerance, with a gradient:
Your Good/Bad/Ignore settings are disconnected from the reality of both the quality of your laser-scanned data and the relationship of your NURBS model to that data. That's why you're not getting useful information.
Here's a good example of how setting a useful Ignore value can be helpful. I've untrimmed the base surface where it meets the pylon. However, I don't want to compare this surface to the pylon data; that's not relevant:
A properly set Ignore value will exclude the pylon data from my color map, which just makes it visually easier to understand the deviation between the surface and the area I'm actually trying to model. Setting it incorrectly looks like this:
I haven't been able to get the sequence to work with the mesh first, then reference the poly…
Hmm, I guess that makes sense, if I understand it as "ignore the points greater than the ignore distance."
That's a fun way of putting it. I've always liked that characteristic of Rhino. "Z fighting!"
In the case of the project I'm currently using, I'd agree it seems like it's not a very accurate mesh, but you'd have to understand the nature of it. It's kind of a trade-secret object, so I'm not able to upload it at the moment or disclose much about it. Basically, it's a worn-out tooling thingamajig, and yes, the mesh deviates a lot from the new current polysrf I machined last year, when I wanted to do a deviation comparison but couldn't get Rhino to do it.
So, I'm playing around with it because of the new developments with Rhino and Cyberstrak.
But I might've taken Bob's thread on a tangent.
That's cool
I wasn't able to get mine to work when I select the mesh before the polysrf… did you select the mesh first?
Hmmm, idk, I still kinda expect it to work better than it has been, but maybe I'm still not using it correctly.
I can see yours is definitely using the mesh though, and mine is using the poly, sooo I just need to figure out how to flip it around
That's really cool too! It's like another reason Rhino is showing even more signs of being a reverse-engineering tool.
Well - to put it back on topic - For this type of work, you should absolutely approach it by making your primary surfaces as "lightweight" as possible, and then use the Cyberstrak CV Modeling tool to refine them from there to hit tolerance. It's a very powerful and repeatable workflow, once you get the hang of it.
And yes - you first select your mesh - the reference you are testing against - and then select the surface you want to analyze.
Oh snap! I think I was just not letting it calculate before or something. I think I got it to work.
Hmm, the points look like they're on the mesh though, and not the polysrf… I guess it's better than nothing. I'll play with the settings now and see how it goes.
The problem I'm having now, besides which object the points are/aren't applied to, per se, is that the transparent object is still culling the visibility of the points.
I thought this topic is about the Cyberstrak plug-in ?
Maybe someone from McNeel (@Gijs) can split this topic / the above posts into something like "PointDeviation best practice"?
It would be cool to see someone demonstrate this, because PointDeviation just doesn't seem to fulfill the job, IMO. The transparency bug aside, who knows - maybe Cyberstrak will have the same problem.
That is how PointDeviation works. It calculates and shows the deviation of the points from the target curve, surface, polysurface, extrusion or mesh.
You can first extract the vertices from the mesh using ExtractPt.
I recommend using ExtractPt with Output=Pointcloud.
Turn off the layer with the mesh, or hide the mesh.
Run PointDeviation with the point cloud as the points and the polysurface as the target.
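The steps above can be mimicked in plain Python, with the mesh reduced to a bare vertex list and the target standing in as the plane z = 0 (both are made-up stand-ins; real meshes and polysurfaces need Rhino's own closest-point queries):

```python
# Stand-in for the ExtractPt -> PointDeviation sequence, assuming a
# mesh given as a plain vertex list and a target that is the plane z=0.

def extract_points(mesh_vertices):
    """Collect unique vertices into a point-cloud-like list."""
    seen, cloud = set(), []
    for v in mesh_vertices:
        if v not in seen:
            seen.add(v)
            cloud.append(v)
    return cloud

def deviation_to_plane(point):
    """Unsigned distance from a point to the target plane z = 0."""
    return abs(point[2])

mesh = [(0, 0, 0.1), (1, 0, -0.2), (1, 0, -0.2), (0, 1, 0.05)]
cloud = extract_points(mesh)
print([deviation_to_plane(p) for p in cloud])  # [0.1, 0.2, 0.05]
```

Collecting the vertices into one point cloud is roughly what ExtractPt's Output=Pointcloud gives you; the deviation per point is then just a closest-point distance to the target.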