Cyberstrak Modeling - New Plugin for Rhino 7/8

Cyberstrak Modeling is a new plugin for Rhinoceros.

It contains functionality for the creation and modification of NURBS geometry and meshes, such as:

  • Modeling of NURBS control vertices
  • Smoothing of Curves, Surfaces, and Mesh areas
  • Surface blend creation
  • Creation of NURBS surfaces on selected Mesh areas

The focus is on easy and intuitive usability. All modeling commands offer a local Undo/Redo, allowing easy comparison of different work stages.

There is also integrated analysis functionality that helps to judge the quality of the current geometry. In addition, analysis commands are available, such as:

  • Sections through geometry
  • Curvature analysis of Curves and Surfaces
  • Graphical deviation between objects (e.g., deviation display between a surface and a mesh)
  • Analysis of transition quality between matched curves or surfaces

The analyses follow all geometry modifications associatively and are updated dynamically during the modeling process.

More info at www.cyberstrak.com and on food4Rhino…


Posted Feb 07, 2024 by Carlos Perez on Rhino News, etc.

wait what :astonished: :money_mouth_face: This is awesome, I was looking for this the other day :smiley:

I’ll be able to do this analysis now I think :star_struck:

where’s the buy-now button :smiley:

dude this is awesome! he’s added ‘reverse engineering’ stuff too! :exploding_head:

https://www.cyberstrak.com/en/shop#!/Cyberstrak-Modeling-Reverse-Engineering-PlugIn-for-Rhino-7-8/p/601883555/category=0

To be fair - you’ve always been able to do this with stock Rhino. PointDeviation lets you select any mesh or point cloud, set your over/under values, and gives you a color-coded map showing which parts of your NURBS model are in/out of tolerance to your reference mesh.

I’ve been trying to find (still need to keep looking) examples of this. I’m not sure I am 100% satisfied with that particular tool, but I’ll keep checking it out.

It’s quite simple actually. I typically use it like this:

You can get all sorts of fancy color gradients, but really all I care about is whether the surface is within a given tolerance. So I effectively set my “Good Point” and my “Bad Point” to the same value. Technically you’ll see my Good Point is ever so slightly less, since RH7 will not let you set them to the same value. (Curiously, older versions allowed you to do this.) The “Ignore” point should be set to some multiple of Good/Bad - it will not return any value in areas that are beyond that value. This is helpful for areas where you want to ignore the data, where the clear intent is to omit a feature.

I find the “Display Hair” to be distracting and don’t use it, but it will give you a graphical representation of which side of the reference data your surface is on - so in some cases this may be useful, but typically you can just… look and see which side it’s on. If you point edit your surfaces, the deviation map will update as well, so there’s an interactivity to it that is helpful.

Best as I can tell, there’s no additional functionality that you’ll get with the Cyberstrak tool, and I say this as someone who is very excited about Cyberstrak.

Also worth noting - the statistics at the bottom are very nice when it comes to validating a NURBS model vs. the scanned data to a client.
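
For anyone who’d rather script the same idea, here’s a rough Rhino Python sketch of that Good/Bad/Ignore logic done by hand - it’s not the PointDeviation command itself, just a per-vertex check of the mesh against the closest point on the polysurface, plus the kind of summary numbers mentioned above. The threshold values are placeholders:

```python
# Rough sketch only: measure each mesh vertex against the closest point on a
# polysurface, bucket the vertices the way Good/Bad/Ignore behaves when Good
# and Bad are effectively the same value, and print simple summary statistics.
# GOOD and IGNORE are placeholder distances - tune them to your scan accuracy.
import rhinoscriptsyntax as rs

GOOD = 0.2      # at or below this distance: in tolerance
IGNORE = 1.0    # beyond this distance: not reported at all

mesh_id = rs.GetObject("Select reference mesh", rs.filter.mesh)
brep_id = rs.GetObject("Select NURBS model", rs.filter.surface | rs.filter.polysurface)
brep = rs.coercebrep(brep_id)

good, bad, dists = [], [], []
for pt in rs.MeshVertices(mesh_id):
    d = pt.DistanceTo(brep.ClosestPoint(pt))   # RhinoCommon closest-point query
    if d > IGNORE:
        continue                               # beyond Ignore: no marker at all
    dists.append(d)
    (good if d <= GOOD else bad).append(pt)

if good: rs.ObjectColor(rs.AddPointCloud(good), (0, 200, 0))   # in tolerance
if bad:  rs.ObjectColor(rs.AddPointCloud(bad), (200, 0, 0))    # out of tolerance
if dists:
    print("checked: {}  in: {}  out: {}  max dev: {:.4f}  mean dev: {:.4f}".format(
        len(dists), len(good), len(bad), max(dists), sum(dists) / len(dists)))
```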

-Sky

K, but why not have something similar to the GUIs of “CurvatureAnalysis” or “DraftAngleAnalysis”?

A smooth color gradient, per se… or even crisp, depending on deviation, etc.

The ‘pointdeviation’ characteristics seem too decimated or sparsely dense, and visually cluttered…

This command just doesn’t do it for me:

falling asleep trying to get this thing to “work”. :sleeping: getting more coffee bbl…

I’d rather like to see some kind of intuitive gradient instead of this sparse looking depicto-graphic effect:

The vast majority of your model is beyond your ignore point, that’s why it looks the way it does. Might also want to turn off the “Hair” as I said above. Also - your Good Point setting should be informed by the quality/accuracy of your laser scanned data. That data looks…rough.

K, so even after doing adjustments, it still shows that this so called pointdeviation tool is lacking the ability to illustrate the deviation intuitively with a smooth gradient.

It appears that the underlying polysurface is having too much effect on the sparse depiction of the deviation.

maybe the term I should use is a ‘decimated’ depiction. even though sparse still applies. you can see the polysrf is determining the densities, which should basically not be how this is done.

the polysrf point density should be irrelevant imo.

maybe if i could select the mesh first rather than the poly, but not sure that would change the result.

I would probably have to create a special poly that has homogenous density to make it work better.

And to add one more thing - from a workflow perspective, what I’ve found to work well and efficiently is to first get your NURBS data to visually match your stl reference. I’ll often assign different color materials to the stl data and my NURBS data. Once they start “z fighting” visually, then it’s appropriate to bring up PointDeviation and start looking for areas that need further refinement to hit tolerance.

Either VSR’s Control Point Modeling or Cyberstrak’s CV Modeling tools are invaluable for hitting tolerance. You can run PointDeviation and then edit your surface with CV Modeling. When you hit “OK” and CV Modeling pops out the revised surface, PointDeviation will update and reflect the new deviation map.
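
For the color-comparison step above, a tiny rhinoscriptsyntax sketch like this will give the two selections contrasting display colors so coincident areas start to “z fight” in a shaded view - display colors rather than materials here, just to keep it simple, and the colors are arbitrary:

```python
# Quick helper for the visual-comparison step: give the reference mesh(es) and
# the NURBS objects contrasting display colors so near-coincident areas z-fight
# visibly in a shaded viewport. Color values are arbitrary placeholders.
import rhinoscriptsyntax as rs

mesh_ids = rs.GetObjects("Select reference mesh(es)", rs.filter.mesh)
nurbs_ids = rs.GetObjects("Select NURBS surfaces/polysurfaces",
                          rs.filter.surface | rs.filter.polysurface)
if mesh_ids:  rs.ObjectColor(mesh_ids, (80, 80, 200))
if nurbs_ids: rs.ObjectColor(nurbs_ids, (220, 160, 0))
```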

You’ve skipped the first step - your geometry is so far away from your reference that you’re not getting any kind of useful information.

The Polysrf density has no bearing whatsoever on the visual result. It will return a color value based on each point of your mesh, not the density of your polysurface. The density of the color map is driven solely by the density of your stl mesh. If you’re not seeing a result for any given vertex of your stl, that point is beyond your Ignore point.

This mesh geometry is basically a 14 yr old die that’s done probably over a million cycles, at least – hence the deviation.

That’s not how I’m interpreting it.

I’m seeing the opposite or inverse effect imo …

hmmmm :thinking: :coffee: I need a minute to think :thinking:

Seems like making the ignore point bigger has the opposite effect…

The bigger I make it the more points I see, the smaller I make it the less…

yeah idk, this is still confusing to me:

plus, besides some wear and tear btw, this die has been slightly redesigned

but like I said, the polysrf appears to control the pointdeviation illustration density – imo.

Start with something like
Ignore = 10.
Bad = 2.
Good = 0.2
and see what it looks like.
Then revise/tighten the settings based on what you see.

If you are using the mesh as the source of points, and the polysurface as the object to be tested, then the polysurface does not affect what is displayed other than the amount of deviation.

That is how Ignore works. Any points greater than the Ignore distance from the surface are ignored (not colored).
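
In other words, with the suggested starting values above (Good = 0.2, Bad = 2, Ignore = 10), each measured distance falls into one of these bands - just an illustration of the rule, not how the command is implemented internally:

```python
# Illustration of the Good / Bad / Ignore bands using the suggested starting
# values above. A distance beyond Ignore gets no color at all; between Good
# and Bad the map shows a gradient; past Bad it's flagged out of tolerance.
GOOD, BAD, IGNORE = 0.2, 2.0, 10.0

def classify(distance):
    if distance > IGNORE:
        return "ignored (not colored)"
    if distance <= GOOD:
        return "good (in tolerance)"
    if distance >= BAD:
        return "bad (out of tolerance)"
    return "between Good and Bad (gradient)"

for d in (0.1, 1.0, 5.0, 25.0):
    print("{:5.1f} -> {}".format(d, classify(d)))
```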

Here’s a 777 engine nacelle - I broke it up into radial quadrants to surface. You can see that the surface density is very low:

Here’s what z fighting looks like, this is how you know your surface is close to your mesh:

Here’s how a color map gradient looks with appropriate settings - I’m showing this at a tighter tolerance than the customer needed, so you can see the areas that are out of tolerance, with a gradient:

This tells me where I would need to adjust my surface if I wanted to refine my tolerance to something tighter.

You can see when I zoom in, each color point is associated with a vertex on the stl mesh:

Your Good/Bad/Ignore settings are disconnected from the reality of both the quality of your laser scanned data, and the relationship of your NURBS model to that data. That’s why you’re not getting useful information.

Here’s a good example of how setting a useful Ignore value can be helpful. I’ve untrimmed the base surface where it meets the pylon. However, I don’t want to compare this surface to the pylon data, that’s not relevant:

A properly set Ignore value will exclude the pylon data from my color map, which just makes it visually easier to understand the deviation between the surface and the area I’m actually trying to model. Setting it incorrectly looks like this:

:thinking: I haven’t been able to get the sequence to work with the mesh first then reference the poly…

hmm I guess that makes sense, if I understand it as ‘ignore the points greater than the ignore distance’ :sweat_smile:

That’s a fun way of putting it. I’ve always liked that characteristic of Rhino. ‘Z fighting!’ :smiley:

In the case of the project I’m currently using, I’d agree it seems like it’s not a very accurate mesh, but you’d have to understand the nature of it. It’s kinda a trade-secret sort of object, so I’m not able to upload it atm or disclose a bunch about it. But basically it’s a worn-out tooling thingamajig, and yes, the mesh deviates a lot from the new current polysrf I machined last year, when I wanted to do a deviation comparison but couldn’t get Rhino to do it :sob: :sweat_smile:

So, I’m playing around with it cause of the new developments with Rhino and Cyberstrak :smiley:

But I might’ve taken Bob’s thread on a tangent :innocent:

That’s cool :sunglasses:

I wasn’t able to get mine to work when I select the mesh before the polysrf… did you select mesh first?

hmmm idk I still kinda expect it to work better than it’s been, but maybe I’m still not using it correctly :sweat_smile:

I can see yours is definitely using the mesh though, and mine is using the poly, sooo I just need to figure out how to flip it around :sob:

That’s really cool too! :star_struck: It’s like another reason Rhino is showing even more signs of it being a reverse engineering tool :blush:

Well - to put it back on topic - For this type of work, you should absolutely approach it by making your primary surfaces as “lightweight” as possible, and then use the Cyberstrak CV Modeling tool to refine them from there to hit tolerance. It’s a very powerful and repeatable workflow, once you get the hang of it.

And yes - you first select your mesh - the reference you are testing against - and then select the surface you want to analyze.

oh snap! I think I was just not letting it calculate before or something :astonished: I think I got it to work :sunglasses:

hmm the points look like they’re on the mesh though :thinking: and not the polysrf… I guess it’s better than nothing. I’ll play with the settings now and see how it goes.

The problem I’m having now, besides which object the points are / aren’t applied to per se, is that the transparent object is still culling the visibility of the points :expressionless:

I can flip it around and see them from the other side but then I can’t see through the mesh even though it’s supposed to be 50% transparent :sleepy:

I thought this topic was about the Cyberstrak plug-in?
Maybe someone from McNeel @Gijs can split this topic / the above posts into something like
“PointDeviation best practice”?

Or pointdeviation wip / beta.

Cyberstrak is probably the way to go.

It would be cool to see someone demonstrate this, cause pointdeviation just doesn’t seem to do the job imo. Transparency bug aside, who knows, maybe Cyberstrak will have the same problem.

That is how PointDeviation works. It calculates and shows the deviation of the points from the target curve, surface, polysurface, extrusion or mesh.

You can first extract the vertices from the mesh using ExtractPt.
I recommend using ExtractPt with Output=Pointcloud.
Turn off the layer with the mesh, or hide the mesh.
Then run PointDeviation with the pointcloud as the points, and the polysurface as the target.
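
If you’d rather script those first steps, a rough rhinoscriptsyntax equivalent of ExtractPt with Output=Pointcloud plus hiding the mesh could look like this - then run PointDeviation manually, picking the point cloud as the points and the polysurface as the target:

```python
# Rough scripted equivalent of the steps above: pull the mesh vertices into a
# point cloud (what ExtractPt with Output=Pointcloud produces), hide the mesh,
# and leave the point cloud selected so PointDeviation can be run next with
# the point cloud as the points and the polysurface as the target.
import rhinoscriptsyntax as rs

mesh_id = rs.GetObject("Select mesh to extract points from", rs.filter.mesh)
if mesh_id:
    cloud_id = rs.AddPointCloud(rs.MeshVertices(mesh_id))
    rs.HideObject(mesh_id)
    rs.SelectObject(cloud_id)
```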
