Deviation Between Surface Types

I’m trying to work out the best method to convert meshes to a solid body type. After some testing, it seems SubD is my best bet.

I’d love to analyze the deviation between the source mesh object and the ToSubD object. Is this possible?

Why? For what? It’s not a simple “conversion” process, not fun, not something to do unless you really really need to.

Hello - if you convert with ToSubD, you have the option to either use the mesh vertex locations as SubD vertices, or to interpolate them with the SubD surface - the latter is probably closer in 3d to the mesh. None of this helps you measure, of course, but it may be useful anyway…

-Pascal

Definitely aware it’s not simple. CNC machining with CAM in Fusion 360. A mesh is basically useless when you need to work with it in software that doesn’t primarily use meshes.

Yeah. I’ve got it pretty close while messing with the ToSubD settings, but it’s pretty hard to tell “how close.” My final product has tolerances to hit (CNC machining).

Hello - PointDeviation should work - it pretends to here, but the SubD does not actually show the deviation - presumably a bug. However, if you run ToNurbs on the SubD, you can get an answer.

RH-64432 PointDeviation: No display on SubD

-Pascal
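
If a scripted sanity check is ever useful alongside PointDeviation, the underlying measurement is just closest-point distances. Here is a minimal, generic sketch in plain Python (not Rhino’s API; as a simplifying assumption, the target NURBS/SubD is approximated by a dense point sampling of it, so the reported deviation is an upper bound that tightens as the sampling gets denser):

```python
import math

def deviation(points, reference_points):
    """For each test point, distance to the nearest reference point.
    Approximates mesh-to-surface deviation when reference_points
    densely samples the target surface."""
    out = []
    for p in points:
        d = min(math.dist(p, q) for q in reference_points)
        out.append(d)
    return out

# Toy check: three mesh vertices vs. a dense sampling of the plane z = 0.
mesh_pts = [(0, 0, 0.02), (1, 0, -0.05), (0.5, 0.5, 0.0)]
ref = [(x / 10, y / 10, 0.0) for x in range(11) for y in range(11)]
devs = deviation(mesh_pts, ref)
print(max(devs))  # worst-case deviation, to compare against a tolerance
```

For real scan sizes you would want a spatial index (k-d tree) rather than this brute-force loop, but the measurement itself is the same.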

Thanks, I’ll give this a try!

There are CAM solutions that can work with meshes…or you can go back to the process that made the original thing and redo it with actual appropriate geometry. Unless what you’re doing fits within some narrow parameters this is going to be an ugly waste of time.

I’m not finding your notes helpful. I’m not new to any of this.

I’ve used PointDeviation a BUNCH for converting meshes (laser-scanned data in my case) to NURBS and can confirm it works great for this. If you have big, simple areas you want to surface, you can even point-edit your surface and get real-time updates. I’ll typically set an under/over threshold, so the points are either blue for good or red for bad.

Doing it with SubDs will definitely add another layer to the process, since as Pascal pointed out you’ll need to convert to NURBS before doing the analysis. If your geometry is pretty simple, it may be that going the traditional NURBS route is faster than SubD just because of this conversion.
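
The under/over threshold idea above is easy to mimic if you ever script your own pass/fail report. A small sketch in plain Python (the tolerance values here are made-up illustrations, not anything from PointDeviation itself):

```python
def classify(deviations, good=0.05, bad=0.10):
    """Mimic an under/over threshold display: 'blue' when within the
    good tolerance, 'red' when beyond the bad one, 'blend' in between."""
    labels = []
    for d in deviations:
        if d <= good:
            labels.append("blue")
        elif d >= bad:
            labels.append("red")
        else:
            labels.append("blend")
    return labels

print(classify([0.01, 0.07, 0.25]))  # ['blue', 'blend', 'red']
```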

Thank you this is wonderful.

I’m very skeptical about the claims here.

Are you familiar with doing all this on data sets containing upwards of 100K to 5M points or so?

I wish everything you said were true, but scanned data’s ability to bog down computers is practically infinite.

Not even considering texture.

Reverse engineering software like RapidWorks is barely capable of streamlining things, relative to what you claimed for the simple Rhino script above.

I only wish Rhino could do those things.

I’m messing around with V5 at the moment and can’t even get a simple mesh to split or trim. It seems like a bug or something.

by ‘solid body type’ do you mean ‘b-rep’?

I like your second statement, because I’m trying to obtain something similar, but I don’t want SubD entities added to my workflow.

I just want mesh/deviation/fromsrf or something.

I might have to resort to V7/Grasshopper, but I don’t know yet…

Here’s an example of a project I did using the workflow I describe:

I’m lucky in that the vendor used on this project - Mimic3D out of LA - makes meshes cleaner and better than any I’ve seen, and will always deliver that data in 3 levels of decimation. For most of the surfacing, I simply use the most decimated data, in this case:

So in this case, that’s 1.4 million polygons. Even so, I rarely if ever feed the entire thing into PointDeviation - if I’m working on, say, the wing, I create a data set that is just the wing, etc. As you have discovered, Rhino’s ability to split or trim a mesh is very dodgy at best. I suggest instead using ExtractMeshFaces with a crossing selection to get what you want. This way you can create smaller data sets reliably and easily.

That said, all of this is of course HIGHLY dependent on your system. I did that work on a mid-level gaming laptop a few years back, and even so you might need some patience from time to time, but it’s still doable, especially if you create data subsets. I’m actually in the middle of building a new system, and one reason I’m upgrading is to do more work like this.
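
The “create a data subset per region” step above is essentially a spatial filter. Outside Rhino you could prototype the same idea like this (a plain-Python sketch, not ExtractMeshFaces itself; the box coordinates are invented for the example):

```python
def subset_by_box(points, box_min, box_max):
    """Keep only points inside an axis-aligned box -- a scripted
    stand-in for isolating one region (e.g. just the wing) before
    running a deviation analysis on it."""
    return [p for p in points
            if all(lo <= c <= hi for c, lo, hi in zip(p, box_min, box_max))]

pts = [(0, 0, 0), (5, 1, 0), (2, 2, 2)]
wing = subset_by_box(pts, (0, 0, 0), (3, 3, 3))
print(len(wing))  # 2 of the 3 points fall inside the region
```

Smaller subsets like this keep each analysis pass responsive even when the full scan is in the millions of points.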

FWIW, the absolute, hands-down best RE plugin for Rhino was VSR/Autodesk Shape, but no one really realized it until it was gone. I think they would have sold far more copies if only they had marketed it to that user base.

I’ve been digging into some research on this matter this weekend, and seeing some of the history.

This VSR/autodesk thing is definitely something I’ll need to look into.

The new V7 SubD stuff is mind-boggling. I’m really happy that Rhino gives me an opportunity to invest time in this going forward.

I’ve always messed around w/ SubD in the past when investigating different features I dreamed about.

But never in Rhino natively until now omg amazing.

I was struggling this weekend with V5, trying to analyse mesh deviation from srf. Omg, I couldn’t even split or trim a mesh.

But, I was successful with V7 within a couple minutes, got the mesh split how I wanted.

Now, I’ll attempt using V7 to create a deviation analysis.

bbl.

There’s definitely some really cool stuff that can be done with V7 now using QuadRemesh over laser scanned data, and then turning that into a sub-d, but realize that you’ll end up with something VERY dense, and likely not very editable. But it’s FAST, so that’s cool. There’s no free lunch with this stuff - if you want a high quality model, it takes time. If you want something quick and dirty, you can for sure get that too, But you’ll never get high quality quickly with Rhino, or really any other RE platform.

Almost forgot to mention, your project looks really awesome,

but just wanted to say, the point count of about a million (853k points, I think it was) is not that much data – I’m saying this in terms of what I think Rhino should eventually be capable of handling, if it’s ever to dominantly handle reverse-engineering-related workflows.

It needs to be capable of processing upwards of 5-10 million data points, imo. And those are year-2015 expectations; obviously 2021 and onward should expect more, but V7 took so long to evolve that it might be another 5-10 years.

I would fully agree that it would be great if Rhino were able to deal with bigger meshes more quickly! I think the bottleneck here is that Rhino only runs on one thread or core. There’s a test command that allows surface display meshes to be generated across multiple threads, and it works well - it would be great at some point for all the really processor-intensive stuff like PointDeviation to be multi-threaded.
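
For what it’s worth, the chunk-and-distribute pattern a multi-threaded PointDeviation would use can be sketched generically. A plain-Python illustration with a thread pool (an assumption of this sketch: in CPython, threads only overlap when the per-point math runs in native code, so a process pool or a compiled core is what a real speedup would actually take):

```python
from concurrent.futures import ThreadPoolExecutor
import math

def nearest_distances(chunk, ref):
    """Closest-point distance from each point in `chunk` to the
    reference sampling `ref` (brute force, for illustration)."""
    return [min(math.dist(p, q) for q in ref) for p in chunk]

def parallel_deviation(points, ref, workers=4):
    """Split the point set into chunks and measure each in a worker,
    then stitch the per-chunk results back into one list."""
    size = max(1, len(points) // workers)
    chunks = [points[i:i + size] for i in range(0, len(points), size)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(nearest_distances, chunks, [ref] * len(chunks))
    return [d for part in parts for d in part]

# Toy data: four test points against a sampled z = 0 plane.
ref = [(x / 10, y / 10, 0.0) for x in range(11) for y in range(11)]
pts = [(0.5, 0.5, 0.03), (0.2, 0.2, -0.01), (1.0, 1.0, 0.07), (0.0, 0.0, 0.0)]
print(max(parallel_deviation(pts, ref)))  # same answer as a serial loop
```

The point of the sketch is only that the work partitions cleanly per point, which is exactly why a command like PointDeviation is a good candidate for multi-threading.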

Just curious - what type of models are you looking to make?

my current project is only about 5.2766827 cubic inches in size.

I’m planning on reducing the size further, for the next version.

it’s a die I built about 14 years ago. It’s been in production bending spring wire, and has cycled well over a million times.

I only have about 10 of the 14 years of production data, so I don’t know the exact cycle count.

At any rate, I’m mostly wanting to compare my 3D scanned data of the die, with my original 3D model, so I can have fun seeing what these people did to my original machined die :rofl:
