A Better Modeling Process for Narrow Sharp Edges?

I could use some much-needed suggestions to improve my modeling workflow. What I’d like to do better is create clean NURBS geometry the first time, rather than constantly rebuild or clean it up later after conversion to mesh. I think I’ve picked most of the low-hanging fruit when it comes to good modeling practices, but there is a set of recurring problems I keep coming across for which time, patience, and dedication seem to be of limited effectiveness. I’d like to better understand what’s going wrong.

I don’t want to spend my days ‘milking mice’ tracking down minute errors, but that’s most of what I’ve been doing lately. The models are for jewelry designs, with the smallest features at ~0.01 mm, and my tolerance is set at 0.001. I’m working on fairly small objects at tight tolerances.

Probably the biggest roadblock I’m running into is the types of forms I am trying to make at the scale I’m making them. The designs often have small, sharp tapering forms, frequently running in parallel, or fanning out from/converging to a single point/pole (long, narrow, ending at points):

I know, I know, avoid poles. However, that’s not (I think) the issue I’m really having. The precision of the surfaces seems to break down beyond a certain scale any time I have a sharp edge. The curve/surface resolution just doesn’t seem to hold up at the smallest scales.
Sharp edges require thin surfaces, and at the thin ends those surfaces are breaking down and not playing nice with their neighbours.

I consistently run into the same kinds of issues: poorly aligned isocurves that twist at the ends of surfaces, causing naked micro edges that won’t join and/or bad surfaces, especially when the angles are very acute. The source curves seem fine. For example, the following geometry was generated from a two-rail sweep:

Note the isocurve crossing over the edge at the end. The problem was only at the end (<0.05 mm across), but the little twist kept the geometry from joining correctly with the surrounding geometry, leaving a naked edge. In this case it was relatively easy to fix, but it wasn’t easy to spot until after all the surfaces were joined. I could be completely off base here, but anecdotally it seems as if Rhino’s geometry generation gets wonky at the smallest scales. I keep seeing things like the example above, and I’m not sure how to avoid it.

It’s a bit of a nail-biter whether fillets are going to work. This little corner piece turned out OK after a number of tries, but with long narrow edges I seem to get consistently hung up on step three:
tolerance.3dm (1.1 MB)
I often find that even though the fillets are generated from the source geometry, they don’t align correctly. Manually extracting curves (_Intersect), extending, exploding surfaces, trimming, and re-joining gets very time consuming, only to find out it creates microloops/small naked edges when trimmed and joined with its neighbours (next example).

In the case below, the naked edges could not be fixed with the JoinEdge command or testRemoveAllNakedMicroLoops - I prefer to use either solution only as a last resort (FYI, the naked edge loops are 0.0019 mm across):

Visually I couldn’t see anything wrong with the source curves, so I wonder if the error was a rounding error accumulated from the kinds of operations that have approximate solutions(?).

Sometimes I find that surfaces generated from things like trims of projected curves can be slightly misaligned, depending on the case. I try to avoid operations like trims, curve projections, or fillets when I can, but sometimes I need a few of them, and then my geometry won’t join or generates naked edges so small I can’t fix them manually because of camera clipping - I can’t get any closer, and the camera controls are too sensitive for practical purposes at that scale (for example, a microloop or surface intersection 0.005 mm in diameter - my tolerance is set at 0.001 mm). In any case, if I’m that close I’ve done something wrong, and I don’t want to be that close anyway. I actually wondered today if I should set my units to microns so I can get closer.

Sometimes these errors cause bad objects when joining two closed polysurfaces of intermediate complexity (I’ll test-join smaller surfaces to see if they’re watertight first, explode them, remove a common surface that each of the objects was created from, and finally join the two smaller polysurfaces into one big one). It’s strange that two valid, closed polysurfaces won’t join when the identical common piece between them is deleted (relative tolerance? - the concept, not the setting, i.e. why it’s good to shrink trimmed surfaces).
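
The ‘relative tolerance’ idea is easy to put in numbers. This is a plain-Python sketch, nothing Rhino-specific, and the feature sizes are made up for illustration:

```python
# An absolute tolerance is a fixed band, so how "tight" it effectively is
# depends entirely on the size of the feature it is applied to.
tol_mm = 0.001  # a typical file tolerance for this kind of work

# hypothetical feature sizes: ring shank, small detail, finest detail
features_mm = [50.0, 1.0, 0.01]
relative = {f: tol_mm / f for f in features_mm}

for f in features_mm:
    print(f"{f:>6} mm feature: tolerance is {100 * relative[f]:.2f}% of its size")
```

At the smallest features the tolerance band is a tenth of the feature itself, which is consistent with tiny slivers misbehaving while the big pieces stay clean.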

I’ve been working with reasonable precision settings given the scale (I’ve tried a couple of different settings, FYI, in different files, so I’m not switching precision back and forth in the same file). I want the correct amount of precision without creating too many control points/ heavy files.
I often use two-rail sweeps to generate the base surfaces, trim surfaces against each other, and use surface blends and lofts. Pretty standard stuff. I’ve only been using Rhino since the new year, but I am a fairly experienced modeler in other packages with some previous NURBS experience. I’m always learning, but I think I’m almost unreasonably careful about good modeling habits: snapping; using surface edges rather than extracted curves as rails from trimmed surfaces; using un-joined clean surfaces as much as possible for source information; alignment of cross sections and rails; working with CheckNewObjects on and showing edges; etc., etc.

But when I create designs with these sharp tapering forms, the issues recur frequently (I know, I know, the obvious answer is don’t make sharp tapering forms, or at the very least be even more careful). I’m trying to find the right balance between the design and the fundamental limitations of the software (trying to make triangular forms in software that prefers rectangles).

As far as I can see, I’m working to the highest level of precision the modeling process will let me (i.e. everything is snapped and as precise as I can make it, at the limits of what the camera will let me see), but it still doesn’t seem to be enough to make consistently good geometry. Even though the objects I’m working on are only a few centimeters across, the errors are minuscule (but unfixable without pulling the model apart and doing a rebuild).

It feels odd that the error recognition/calculation appears to be more precise than the geometry creation process (i.e. I get naked edges half the size of my minimum tolerance - perhaps the calculation is based on a radial tolerance?). Maddeningly, it feels like the software is precise enough to recognize the errors but doesn’t give me the control to make clean geometry or to fix the errors directly afterwards. The models I’m making are only a few centimeters across, but the ‘unfixable’ errors are one screen pixel across at full-screen model width (*unfixable through direct manipulation - rip it apart and start over). They’re not easy to spot until after the geometry is joined.


Any tidbits would be appreciated, for example:
ex. When CheckNewObjects indicates you’ve created bad geometry, besides merely extracting the bad geometry (ExtractBadSrf), is there a way of highlighting on the model the bad edges the tool reports in its feedback?
ex. How does angle tolerance affect object snapping on small objects with acute angles?
ex. Anecdotally, it seems to work better to generate fillets on un-joined surfaces, regardless of tangency.
ex. In a two-rail sweep, making your cross sections consistently perpendicular to the rails creates better end geometry.
ex. Trim works better on un-joined surfaces. Explode surfaces, trim, re-join.
ex. template file: Large Objects - Microns? ;p

A very well thought out and presented description of your issues. On behalf of all Rhino users everywhere: I thank you.


Hi Andre - looking at your ‘tolerance’ file, it looks like filleting (FilletSrf), intersecting, trimming, rejoining is a fine way to go. You could use a couple of shortcuts, maybe- for instance with the starting surfaces- if you copy the base plane up to the top of the wiggly shapes, you can select all the surfaces and have them trimmed to each other using CreateSolid - then explode and do your surface filleting.

Not really - in general, I find simply Untrimming (KeepTrimObjects=Yes) & retrimming usually fixes things.

Not at all - angle tolerance is what Rhino uses to determine if, for example, two curves are tangent ‘enough’.

I’m not sure what you mean here. If you are using FilletSrf and not FilletEdge, it may simply be easier to deal with all the ‘by hand’ intersecting and trimming that you may encounter if the thing is exploded.

Yes - keep in mind you can true things up using the ‘Add slash’ button in Sweep2. Also, sweeps to a point from curved cross sections tend to make the curvature of the surface shoot through the roof as it approaches the point - this can be a little tricky to deal with in some cases; it can look bad and make surface matching difficult.
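
The curvature blow-up at a pole is essentially 1/r arithmetic - a quick plain-Python sketch (the radii are made up for illustration):

```python
# As the cross sections of a sweep shrink toward the pole, their radius r
# goes to zero and the curvature (1/r for a circular arc) explodes.
radii_mm = [1.0, 0.1, 0.01, 0.001]
curvature = {r: 1.0 / r for r in radii_mm}  # units: 1/mm

for r in radii_mm:
    print(f"cross-section radius {r} mm -> curvature {curvature[r]:g} per mm")
```

which is one way to see why surface matching near the point gets so touchy.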

This is true - it should not be, but it is. Also, remember the Crv filter for curves - very handy when trimming with surface edges like the edge of a fillet. Often when the intersection of two surfaces is tenuous, as with a fillet edge that comes just tangent to a surface, using the surface as a trimming tool may fail, because the intersection curve that ends just on the tangent to the second surface may be gappy. So, when prompted for the trimming object, if you type Crv and Enter, only curves can be selected; you can then click on the edge of the tangent surface, and the edge, not the surface, will be selected - no dodgy surface-to-surface intersection is needed, and the curve will be used to trim. Try it…

For these objects, I’d set the file tolerance to .0001, not .01. Then, when you get to the end and some things do not join cleanly, you can safely back the tolerance out to .001 (your actual desired tolerance if I remember right) and some or all of the joins should work.
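
The arithmetic behind working tight and relaxing afterwards can be sketched in plain Python (the gap value is hypothetical):

```python
# Operations run at a tight tolerance leave residual gaps roughly on the
# order of that tolerance, so relaxing to the real target tolerance
# afterwards lets Join close them.
modeling_tol = 0.0001  # tolerance used while trimming/filleting
target_tol = 0.001     # the tolerance actually needed in the finished model

gap = 0.0003  # hypothetical residual gap left by an intersection

joins_while_modeling = gap <= modeling_tol  # shows up as a naked edge
joins_after_relaxing = gap <= target_tol    # well inside the final band
print(joins_while_modeling, joins_after_relaxing)  # False True
```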

I’ll read through the rest of your note and see if I can offer my 2 piasters worth on anything.


“The models are for jewelry designs, smallest features at ~0.01 mm…”

At 0.01mm feature size, you are not designing jewelry. Even 0.2mm, which I would suggest as absolute minimum, is optimistic, but at least within the ballpark.

Besides that, avoid anti-tangent, or nearly anti-tangent, curves. Limit yourself to between around -165/+165 degrees at the very least, or end with chamfers or rounds of finite width. In reality such slivers, whether incised or proud, will never survive printing/injection, casting, or finishing.

@Pascal Wow, thanks for the help, Pascal. I really appreciate you taking the time to answer. The information helps a lot.
The bit about remembering the Crv filter is gold:

Some clarifications:

Just to be clear, would setting the angle tolerance smaller/tighter give me more accurate intersection osnaps in a case like the one below, where the angles are very acute?

Sorry, bad lingo on my part. What I probably should have said was that generating fillets on surfaces seems to work better than on polysurfaces, regardless of the tangency (of the two joined surfaces that make up the polysurface). I found through trial and error that it’s better to have a surface created from a single curve rather than a joined curve (i.e. a polycurve). As in the case in the image, the surfaces were generated from a two-rail sweep. The source for both rail curves (for the highlighted surface on the left) was created by a G3 curve blend, then trimming and joining. Theoretically the resulting surface should be good source data for the fillet, but more often than not it isn’t. So I usually add the extra step of rebuilding the curve before I generate the surfaces, and it seems to work better (highlighted surface on the right). Otherwise I can get the dreaded untrimmed fillets, with extra pieces for each surface that need to be manually trimmed.

I don’t like adding extra steps to the workflow, though. It seems ‘extra steps’ are where I lose fidelity/accuracy - every time I apply an operation that involves approximation (plus added chances for human error). Since I don’t know which operations involve approximations (as opposed to exact calculations), I try to avoid them. I generally try to remove extraneous variables from the modeling process. The trick is knowing what is extraneous.

I had a couple of additional questions. I was trying to distill some of the key questions from my mega-post.

  1. Is using the RebuildEdges command effectively the same as re-trimming at a different tolerance?

[In other words, if my system tolerance wasn’t tight enough and I was getting naked edges, I could explode the surfaces, RebuildEdges at a tighter tolerance, and re-join, which would potentially improve the situation. My understanding of a ‘brep’ is that it is a surface with the NURBS trim information attached. So running RebuildEdges takes the original surface and reapplies the trims, but at a tighter tolerance. The trim itself is a NURBS curve calculation, correct?]

  2. In some cases, does the relative size of the surfaces being worked on cause a loss of resolution?
    System tolerance setting: 0.01.
    Case one: a 1 mm long surface, trimming off 0.1 mm. As a fraction of the surface, the trim is 0.10.
    Case two: a 100 mm long surface, trimming off 0.1 mm. As a fraction of the surface, the trim is 0.001.
    Case three: same as case one, but 100 mm away from the world origin. I have no idea what the math is here, but does distance from the origin degrade the resolution in any way?
    In my previous career in the games industry (not as a software engineer), in open-world environments where the world was quite large (i.e. a 2 km radius), animations, positional calculations, etc. could in some cases break down near the fringes of the world (keep in mind games need to run in real time, and data disc size/compression/streaming is an issue).
    In any case, changing the tolerance from 0.001 to 0.0001 probably fixes it.
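
For what it’s worth, the double-precision side of case three can be checked in a few lines of plain Python: the spacing between adjacent representable numbers (one ULP) grows with the magnitude of the coordinate, but even thousands of millimetres from the origin it is still many orders of magnitude below a 0.001 mm tolerance:

```python
import math

# math.ulp(x) is the gap between x and the next representable double:
# the best absolute coordinate precision available at that magnitude.
for coord_mm in (1.0, 100.0, 10_000.0, 1e9):
    print(f"coordinate ~ {coord_mm:>12} mm -> ULP ~ {math.ulp(coord_mm):.3e} mm")
```

So 100 mm from the origin shouldn’t cost any meaningful resolution on its own.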

Hi Andre -

No. The Int Osnap is not dependent on Angle tolerance that I know of.

Sweep2 uses, behind the scenes, a refit version of the input curve as a rail (except in the cases where a ‘Simple Sweep’ is possible and the option is checked by the user). So the exact structure of the rail curves is not reflected in the sweep surface. However better curvature continuity (CurvatureGraph) is certainly a good thing. In other cases, like for the cross sections of a sweep, or for Loft, or EdgeSrf (but not for NetworkSrf) the structure of the input curves is (or can be, depending on options) used directly in the surface, so having matching curves is a very good thing (e.g. make all cross sections to a sweep or loft from edited copies of one of them- not always possible but a good way to get a clean loft)

No. What this does is ‘reset’ the edges back to the surface. For example, make two surfaces that are some small distance apart - significantly larger than tolerance… JoinEdge to force the surfaces to override the tolerance and join (generally not a good idea, this is just a demo!). Notice, if you zoom in, that the surface isocurves may no longer hit the joined edges - the Join has pulled the edges off the surfaces. Explode and Join - works, right? Now Explode and RebuildEdges - now the edges pull back to the surfaces, and Join won’t work any more - out of tolerance. The edges are better, in the sense that they are on the surfaces again, but it is not the same thing as tightening tolerance, untrimming, intersecting, and retrimming - that is a better way to get good edges.
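
A toy numeric sketch of what ‘edges pulled off the surface’ means - plain Python, with a made-up bilinear patch standing in for the real surface (not Rhino’s actual data structures):

```python
import math

# A brep edge is (loosely) a 3D curve that should lie on the surface's
# image of a 2D trim curve, within an edge tolerance. A forced JoinEdge
# can leave the edge floating off the surface; rebuilding the edge
# re-evaluates the trim curve through the surface and pulls it back on.

def bilinear_srf(u, v):
    # toy underlying surface: a bilinear patch with one corner lifted
    p00, p10, p01, p11 = (0, 0, 0), (10, 0, 0), (0, 10, 0), (10, 10, 2)
    return tuple(
        (1 - u) * (1 - v) * a + u * (1 - v) * b + (1 - u) * v * c + u * v * d
        for a, b, c, d in zip(p00, p10, p01, p11)
    )

# a trim edge sampled in (u, v) parameter space along v = 1
uv_samples = [(u / 10, 1.0) for u in range(11)]

# an edge dragged 0.005 off the surface by a forced JoinEdge (hypothetical)
offset = (0.0, 0.0, 0.005)
joined_edge = [tuple(c + o for c, o in zip(bilinear_srf(u, v), offset))
               for u, v in uv_samples]

# "rebuilding" = re-evaluating the trim curve through the surface
rebuilt_edge = [bilinear_srf(u, v) for u, v in uv_samples]

def max_deviation(edge):
    return max(math.dist(p, bilinear_srf(u, v))
               for p, (u, v) in zip(edge, uv_samples))

print(max_deviation(joined_edge))   # ~0.005: off the surface, joined anyway
print(max_deviation(rebuilt_edge))  # 0.0: back on the surface
```

The rebuilt edge lies on the surface again, but the 0.005 gap to its neighbour comes back - which matches the Explode/RebuildEdges behaviour described above.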

It should not matter as far as I know. Moving stuff very far from the origin can have an effect, but not at 100 units - maybe at 10,000+.


Features probably wasn’t the right term. I was thinking more along the lines of the absolute limits of the voxels/the finest resolution I could cut on a CNC. I’m purchasing a CNC machine in the next few months with a positional resolution of 0.003 mm and repeatability of 0.01 mm.

In some cases I’ll be using the machine for carving wax, in which case you are correct about the minimum feature size. But other times I’ll be using the machine to carve materials like mammoth ivory and gold directly - no printing and casting required. I’ve seen the results firsthand, and they are astounding. In that second case it’s only the polishing step I’ll need to consider, and not much at that.

I am trying to work out the tolerances and modeling methods that work across a range of cases including the odd extreme cases. While I may not always work at the limits, it’s nice to have things hold up when I do. That’s why I am focused on the absolute limits of what is possible. I plan on putting my machine to good use.

I’m also trying to take into account some limited re-usability of the designs (like using elements from a ring for a bracelet).

EDIT: Adding some pictures of results milled on NSCNC 4-axis Evoke and 5-axis Mira CNC machines (thanks to André Zverev at NSCNC for giving me permission to re-post the pictures; just to be clear, not my work). This gives a better indication of the kind of tolerances I’m trying to hit.

Direct material milling:


@Pascal Thanks, that helps a lot. I’m looking forward to seeing how this new information plays out in my modeling process over the next few days/ weeks.