Simple surface from simple planar curve isn’t planar

I sometimes run into a weird behavior that I unfortunately cannot reproduce.
This simple planar surface was made from a rectangle curve, followed by PlanarSrf.
The weird thing: that should always create a planar surface.
The curves, as well as the surface points, are on one plane, but somehow when I click the surface it shows the scale box on an axis that should have no deviation according to the curves and points:

When I rebuild the surface the issue disappears, so it isn't a big issue, but I remember having this before and it was annoying in that instance.
Problem Surface.3dm (142.4 KB)


The problem comes from the rendering mesh. Run the RefreshShade command and it will be fixed. Or, alternatively, go to “Rhino options > Mesh” and then change to another setting there, then revert to the original setting.

And if that doesn’t work then this might help:

The odd thing is how it got that way… would be good to know. Interesting that it is getting the scale handle info from the render mesh bounding box and not the object itself which is perfectly planar as far as I can see. Pretty much anything fixes it, a SetPt in Z for example… I guess that refreshes the render mesh as well.
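To illustrate the suspicion above with a minimal pure-Python sketch (not Rhino code; the coordinates and the deviation are made-up values): a bounding box computed from stale render-mesh vertices can report a tiny Z extent even though the NURBS corners are exactly coplanar.

```python
def bounding_box(points):
    """Axis-aligned bounding box of a point list: (min_corner, max_corner)."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# The NURBS surface corners: perfectly planar at Z = 0 (hypothetical sizes).
surface_corners = [(0, 0, 0), (10, 0, 0), (10, 5, 0), (0, 5, 0)]

# A stale render mesh whose vertices kept a tiny Z deviation from an
# earlier edit (value borrowed from the one reported later in the thread).
stale_mesh = [(0, 0, 0), (10, 0, 0), (10, 5, 4.27384e-06), (0, 5, 0)]

srf_min, srf_max = bounding_box(surface_corners)
mesh_min, mesh_max = bounding_box(stale_mesh)

print(srf_max[2] - srf_min[2])   # 0.0 -> no Z scale handle expected
print(mesh_max[2] - mesh_min[2]) # 4.27384e-06 -> phantom Z extent
```

If the gumball reads its extents from the second box rather than the first, a Z scale handle appears even though the surface itself is planar.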

This can be very problematic for inexperienced users, as they don't realize that every transformation of surface entities triggers meshing and re-meshing in the background to display the shading.

And the shading is relied on during various commands, such as joining in an attempt to get a watertight polysurface. That can lead to a domino effect that interferes with success, depending on the file tolerance and Rhino's ability to seal things up with the given mesh shading settings.

Hence “The RebuildEdges command restores original 3-D surface edges that have been forced away from the surface through editing.” I've found RebuildEdges to be more helpful than RefreshShade.

If the edges are messed up, then RefreshShade is pointless, imo.

No, you have it backwards. Joining relies on the file absolute tolerance, and once the edges of an object are joined, the mesh is created using those. The display mesh settings have absolutely no influence on surface joining. The file tolerance settings of course do.
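A toy pure-Python sketch of that rule (the edge sampling, point lists, and tolerance values are invented for illustration; real joining operates on NURBS edge curves, not point lists): two edges join only if their gap is within the file's absolute tolerance.

```python
def max_edge_gap(edge_a, edge_b):
    """Largest distance between corresponding sample points of two edges."""
    return max(
        ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
        for (ax, ay, az), (bx, by, bz) in zip(edge_a, edge_b)
    )

def can_join(edge_a, edge_b, absolute_tolerance):
    """Join succeeds only when the edges sit within the file tolerance."""
    return max_edge_gap(edge_a, edge_b) <= absolute_tolerance

# Two edges 0.0005 units apart in Z (hypothetical values).
edge_a = [(0, 0, 0), (5, 0, 0), (10, 0, 0)]
edge_b = [(0, 0, 0.0005), (5, 0, 0.0005), (10, 0, 0.0005)]

print(can_join(edge_a, edge_b, 0.001))   # True  -> edges join
print(can_join(edge_a, edge_b, 0.0001))  # False -> naked edge remains
```

Note that no mesh appears anywhere in the decision; only the distance between the edges and the tolerance matter.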

That sounds correct, and is also my point.

Then if you explode the polysurface to do some work on it, then rejoin the surfaces back into a polysurface in an attempt to obtain a watertight polysrf, you'll potentially encounter a domino effect of the render mesh drifting ever further from the original polynomial equations it is attempting to represent.

Therefore, RefreshShade is pointless without rebuilding the edges back to the original.

This is simply not correct. If you ignore this, then over time, when you have issues with naked edges in a particular polysrf that won't close, the actual error can snowball and become more difficult to correct without actually using the RebuildEdges command at some point.

This is correct. Still, the render mesh can obscure the errors associated with polysrfs that fail to close, and the nature of where and how they're not closing.

If the user doesn't rebuild the edges of a problematic polysrf that won't close, then they won't see the true reason behind the naked edges for that problem.

Hence, render mesh settings do in fact have an effect on said issue.

Technically I could demonstrate this without a polysrf being closed, so that's not really necessary.

It’s just about whether a polysrf can be joined without naked edges in between.

Obviously, some users want to obtain closed polysrf geometry that's watertight, and this can lead to tweaking certain parameters like file tolerance or render mesh quality to work around bad modeling techniques and cheat their way to a “watertight” result.

So, that’s why I mentioned it.

But the issue of polynomial geometry being represented by mesh geometry for the sake of shading ultimately does have an effect on the resulting shade over time, as the shade changes with successive transformations.

Hence, one of my common sequences I will do to geometry that I get from a client to see the truth is the following:

I will change the absolute tolerance to 0.0005", change the mesh quality to very high (I can share specifics later), then explode the geometry, rebuild the edges, refresh the shade, and rejoin to see whether the geometry is closed or not. If it's not closed, I will then run ShowEdges and see where the naked edges are.

This process takes a few seconds, and immediately reveals to me where the geometry is bad relative to not being water tight.

I also model this way from time to time to ensure that I don't get leaky geometry, given Rhino's tendency to allow ‘surface edges to be forced away from the surface through editing.’

I have not read everything, but setting the Gumball to Align to Object does not show a blue scale handle. I would be worried if it did.

With the Gumball aligned to World, I scaled the surface with a factor of 1000 on the blue scale handle. It changes the Z coordinate of all four points.

This leads to my assumption that the cplane was offset ever so slightly above the World top plane.

I did that too (actually by 1m) and had the same result - they moved up to Z=2.13692001164. But the blue handle disappeared when I did that, and all 4 corner points had exactly the same Z value about as far out as the display would go - 11 digits past the decimal point.
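As a quick sanity check of the arithmetic, a Z offset far below display precision becomes plainly visible once scaled by a large factor. A pure-Python sketch (the starting offset and the scale factor are hypothetical, chosen only so the result lands near the value quoted above):

```python
def scale_z(points, factor, anchor_z=0.0):
    """Scale the Z coordinate of each point about anchor_z."""
    return [(x, y, anchor_z + (z - anchor_z) * factor) for x, y, z in points]

# Hypothetical corner points carrying a Z offset far below display precision.
corners = [(0, 0, 2.13692001164e-06), (10, 5, 2.13692001164e-06)]

# Scaling by a large factor makes the invisible offset obvious.
scaled = scale_z(corners, 1_000_000)
print(scaled[0][2])  # ~ 2.13692001164
```

All corners keep exactly the same Z after scaling, which matches the observation that the four points agree to 11 digits past the decimal point.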

Yes, it is. The proof is that one can work entirely in wireframe, or with virtual geometry such as in RhinoCommon, without meshes ever having been created for the geometry in question, and the results will be exactly the same as if all the operations had been done in shaded mode.

The display meshes come after the NURBS geometry has been created, not before. Display meshes have zero effect on joining or otherwise modifying NURBS surfaces. Joining or otherwise modifying NURBS surfaces does have an effect on their display meshes.


That would be weird.

Pretty sure I didn’t say it came before.

Maybe I’m thinking mostly about the edges then, which may be affected by the display mesh quality and file tolerance.

I agree? In fact the display mesh will update after each transformation.

Therefore, after a NURBS entity is created and then changed, the mesh updates, or rather changes, to adapt to each edit, so to speak.

Imo, it doesn't do this very well, hence the need to run the RefreshShade command. Even then, that doesn't fix the messy meshed-edge problem.

Hence, the need for the RebuildEdges command.

You can think of a modified NURBS as dynamically becoming, over time, a new permutation of the original.

Therefore you’re right, the display mesh comes after the change/edit of the NURBS.

So, explain why this happens:

“Joined and exploded polysurface. Edges are pulled away from the surface.”


Are the “edges” pulled away because the file tolerance is controlling the mesh quality settings, thereby changing the display mesh and changing whether the join process succeeds or fails, i.e. whether edges end up naked or the polysrf closed?

And if a user works in ‘wireframe’ do the “edges” get pulled away? :thinking:

Yes, try it and watch the edge move.

It's pretty weird: run BoundingBox on the plane, then RefreshShade, then BoundingBox again.

Should BoundingBox care about render meshes when given only a surface? Because the render mesh of the resulting bbox polysrf ends up with a 4.27384e-06 in it, and the polysrf has volume.

If BoundingBox does not consider the render mesh, then this 4.27384e-06 must actually be in the plane, but it cannot be accessed, as any operation at all cleans it up.
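A pure-Python sketch of that last point (the tolerance value and helper are assumptions, not Rhino internals): if every operation snaps sub-tolerance coordinates, the stray value is discarded the moment you try to touch it, which would explain why it can never be inspected directly.

```python
def clean_within_tolerance(points, tol=1e-5):
    """Snap coordinates smaller than tol to exactly zero, mimicking how
    a rebuild-style operation might discard sub-tolerance noise."""
    return [tuple(0.0 if abs(c) < tol else c for c in p) for p in points]

# A point carrying the stray value seen in the bounding-box polysrf above.
dirty = [(0, 0, 0), (10, 5, 4.27384e-06)]

cleaned = clean_within_tolerance(dirty)
print(cleaned[1][2])  # 0.0 -> the deviation vanishes after any operation
```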

Hi @sebastian.dennert,

Select your surface and run the What command. You’ll see that it is a plane surface, which is what you want.

– Dale

The edges are pulled away because Joining takes the input edges and averages them to make the new edge, “on the surface” or not doesn’t matter.
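A minimal pure-Python sketch of that averaging (the point lists and offset are invented, and Rhino averages NURBS edge curves rather than sampled points): the joined edge ends up midway between the inputs, so it can sit off both surfaces.

```python
def average_edges(edge_a, edge_b):
    """New joined edge: the pointwise average of the two input edges."""
    return [
        ((ax + bx) / 2, (ay + by) / 2, (az + bz) / 2)
        for (ax, ay, az), (bx, by, bz) in zip(edge_a, edge_b)
    ]

# One edge lies on its surface at Z = 0, the other is off by 0.001.
on_surface = [(0, 0, 0), (10, 0, 0)]
off_surface = [(0, 0, 0.001), (10, 0, 0.001)]

joined = average_edges(on_surface, off_surface)
print(joined[0][2])  # 0.0005 -> the new edge sits off both surfaces
```

This is why "on the surface or not doesn't matter": the averaged edge is pulled away from both inputs by half the gap.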

Hello- I am not sure what you mean by this… render meshes do not have any effect on whether objects will close up or not.

The meshes are derived from a sampling of points on the surface(s) and then further refined. Certainly they can be ugly if the input objects’ edge tolerances are far off (for example after using JoinEdge, not a good idea unless you know what you are doing), but this is a symptom, not a cause, of the edge problem.
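A toy pure-Python sketch of that sampling stage (the function names and the patch are made up, and the real mesher is far more involved): the initial mesh vertices come from evaluating the surface on a UV grid, before any refinement.

```python
def sample_surface(f, nu, nv):
    """Vertices of an initial display mesh: evaluate f on a uniform UV grid."""
    return [
        f(i / (nu - 1), j / (nv - 1))
        for j in range(nv)
        for i in range(nu)
    ]

def plane(u, v):
    """A hypothetical planar patch, 10 by 5 units at Z = 0."""
    return (10 * u, 5 * v, 0.0)

verts = sample_surface(plane, 4, 3)
print(len(verts))  # 12 vertices before any refinement
```

The direction of data flow is the point here: the mesh is computed from the surface, never the other way around.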


I’m pretty sure I’ve done that before on multiple occasions.

But I’ll definitely do it again.

this “new edge”, is it based on polynomials? like the original NURBS? or is it linked to the render mesh?

Of course everybody is saying the render mesh has nothing to do with it, but I’m just trying to understand where Rhino decides to ‘split’ and/or ‘move’ NURBS (surface) edges…

Because I still think the render mesh has an entangled effect. But I’ll keep looking into it.

I’ll be looking more into this and get back to you, because I’m still not 100% on board with that claim.

Maybe I just need more understanding of the NURBS “wireframe”.

I agree.