Quality of NURBS from the mesh

Hello, what is the quality of NURBS surfaces obtained from a mesh? How do they look under zebra and curvature analysis? And what problems typically arise in this case?

Did you maybe forget to add a file or screenshots? It's kind of hard to answer without further information.

Yes, everything is fine. I know people convert meshes to NURBS; I'm just very interested in the process.

The process is called Reverse Engineering:


I see. What if you use specialized software where the process is automated?

http://www.rhino3.de/design/modeling/reengineering/ this says it all

Generally speaking, over the last years there has been a trend of expecting automation to fully replace manual work. The truth is, automation simplifies boring tasks and speeds things up, but one-click solutions will hardly yield quality output. This is true for reverse engineering as well. It's a compromise: if you can live with reduced quality, go for it. But using mesh-to-NURBS algorithms to get high-quality NURBS is rather counterproductive, because fixing the output is much harder than doing it right from scratch.


There are some applications that can automatically create NURBS surfaces from point clouds or meshes. Unfortunately, the results are usually not acceptable. These NURBS models can hardly be edited, because they consist of many small patches and a lot of surfaces with singularities.

OK, so there is a process for creating NURBS from scan data. And what if I have a mesh from SubD?

Then you will get a NURBS patch layout based on that, with all the pros and cons. But whatever the result, the patch layouts I've seen are all nearly impossible to edit. A second thought: mesh models/data contain errors, and usually one task of reverse engineering is to improve on this. It's wrong to assume that a surface has to be in the exact position: deviation is good in certain areas, while in other areas it's not wanted. So, as always, it depends on the problem. This thread is a bit generic, because without a real-world problem it's impossible to talk about "quality".
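To make the "deviation is good in some areas, unwanted in others" point concrete, here is a minimal sketch in plain Python/numpy. It is a made-up 1D example, not the output of any real scanning or surfacing tool: a noisy "scan profile" is approximated by a smooth low-degree curve, and the signed deviation is then checked against a region-dependent tolerance (tight where the part mates with others, loose on free-form areas).

```python
import numpy as np

# Hypothetical scan profile of a bumper cross-section: smooth shape plus
# small bumps/noise, as a scanned clay model would have.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
scan = np.sin(np.pi * x) + 0.01 * rng.standard_normal(x.size)

# "Rebuilt" smooth curve: a low-degree least-squares polynomial fit.
coeffs = np.polynomial.polynomial.polyfit(x, scan, 5)
rebuilt = np.polynomial.polynomial.polyval(x, coeffs)

# Signed deviation of the rebuilt curve from the raw scan at each sample.
deviation = rebuilt - scan
print("max |deviation|:", np.abs(deviation).max())

# Region-dependent tolerance (values invented for the example): tight where
# the surface must match mating parts, loose where smoothing is welcome.
tol = np.where(x < 0.2, 0.005, 0.05)
print("samples out of tolerance:", int((np.abs(deviation) > tol).sum()))
```

The point is only that "quality" is a per-region question: the same deviation that fails in the tight zone is perfectly fine, even desirable, in the free-form zone.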


Could you explain this in more detail?

Imagine the front bumper of a sports car. In automotive design you usually do clay modelling: a craftsman models 1:1 or 1:4 scale models. After modelling, the clay model is scanned.
Now, once you get the data, you already have a good model. Professional clay modellers produce astonishing models; they even do real zebra analysis. However, there are probably still many bumps in the model. It's simply impossible to sculpt clay perfectly: you cannot "zoom", you have reduced analysis capability, no symmetry, and there is material deformation. Whatever the cause, a CAD model can be much more accurate. Now it would be absolutely stupid to say "just put surfaces on the scan and the job is done". Instead you want a better model based on that scan. So you have to decide which parts of your model need to be reworked, and which parts have to stay as close to the scan as possible. This is something an algorithm can't do, because algorithms don't have "taste", nor do they know where to diverge from the scan.
Another aspect is data reduction and editability. If you reverse-engineer a shape, you always want as few patches and control points as possible, because you want to be able to modify the bumper in order to react to technical or design-related changes. Well-made surface models are much easier to modify and are usually of higher quality. Creating a good surface model is very complex and usually based on years of experience. It would be incredibly hard to teach software how to do good patch layouts.
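The data-reduction idea can be sketched in a toy numpy example. The six polynomial coefficients below stand in for a sparse set of control points; real NURBS fitting in reverse-engineering tools is of course more involved, and all the numbers here are invented for illustration.

```python
import numpy as np

# Hypothetical "scan": 2000 noisy samples of a smooth doubly curved patch.
rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(-1.0, 1.0, n)
y = rng.uniform(-1.0, 1.0, n)
z = 0.5 * x**2 - 0.3 * x * y + 0.2 * y**2 + 0.005 * rng.standard_normal(n)

# Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2:
# six numbers now describe what 2000 scan points sampled.
A = np.column_stack([np.ones(n), x, y, x**2, x * y, y**2])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
residual = z - A @ coeffs
print("coefficients:", np.round(coeffs, 3))
print("max |residual|:", np.abs(residual).max())
```

A design change now means adjusting six coefficients instead of pushing thousands of scan points around, which is the same reason a surfacer wants few patches and few control points.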

Here is an example of a deviation introduced in order to optimise the highlight. You can clearly see the bump from the scan. Note how clean the patch edges are.


Uh, look at those awful flat areas and bumps in the highlights ; )

Sorry, could not resist : )

What you say about reverse engineering from polygons is right of course.

Well, it's not optimal there, but this surface is part of a difficult area, and it's also not the final part. Sometimes you need to hold outlines, it has to look good from many perspectives, and so forth. You might agree that with that bump it would look much weirder.

This is the same surface with Z-highlights:

Hey, I was just kidding; as a part-time surfacer, I just can’t help myself : )


Well, so the goal is the transformation of the model into NURBS (with improvement in parallel). The automatic approach often produces nonsense that cannot be edited when you need to. For example: