NURBS vs MONITOR?

everybody here working in and on Rhino, i am sure, has at least heard the word once … NURBS

BUT

what does that actually mean for what is being displayed on my monitor? and why is it not possible to display NURBS without an ugly polygon conversion?

i am hoping that somebody with a bit of deeper insight could explain that in a nutshell, for dummies like me. here is what i know so far, and feel free to correct me or add to it:

generally, NURBS describe an object through mathematical, mostly parametric curves and surfaces, which are theoretically infinitely precise, at least within their parametric limitations. of course there are also practical limits, given our computers and processors.
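
to make that concrete, here is a tiny sketch of what "parametric and exact" means: the Cox-de Boor recursion that evaluates a NURBS curve at any parameter t (toy code, not Rhino's implementation; all names here are made up for the example):

```python
def basis(i, p, t, knots):
    """Value of the i-th B-spline basis function of degree p at parameter t."""
    if p == 0:
        # (note: with this half-open convention the very end of the domain,
        # t = knots[-1], needs a special case in real implementations)
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, t, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
                * basis(i + 1, p - 1, t, knots)
    return left + right

def nurbs_point(t, ctrl, weights, knots, p=3):
    """Exact point on a NURBS curve at any parameter t: a weighted,
    rational combination of the control points."""
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for i, (cp, w) in enumerate(zip(ctrl, weights)):
        b = basis(i, p, t, knots) * w
        num = [a + b * c for a, c in zip(num, cp)]
        den += b
    return [a / den for a in num]

# Degree-3 curve with 4 control points and a clamped knot vector.
ctrl = [(0, 0, 0), (1, 2, 0), (3, 2, 0), (4, 0, 0)]
weights = [1, 1, 1, 1]
knots = [0, 0, 0, 0, 1, 1, 1, 1]
print(nurbs_point(0.5, ctrl, weights, knots))  # [2.0, 1.5, 0.0], exact at ANY t
```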

Meshes, in contrast to NURBS, describe an object through vertices which are connected by straight lines only, hence the lack of precision as we "know" it. one can make the triangles smaller, but that would never reach the resolution of NURBS. at least so it has been said somewhere; don't hit me now for missing quotes, please.
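
a tiny worked example of that precision gap: approximate a unit circle with a regular n-gon and look at the worst deviation (the sagitta). it shrinks with n but never reaches zero, while the NURBS circle is exact at every parameter:

```python
import math

def max_chord_error(n, r=1.0):
    """Worst-case distance between a circle of radius r and the chords of
    a regular inscribed n-gon: the sagitta r * (1 - cos(pi / n))."""
    return r * (1.0 - math.cos(math.pi / n))

for n in (8, 64, 512, 4096):
    print(n, max_chord_error(n))
# 8    ~7.6e-02
# 64   ~1.2e-03
# 512  ~1.9e-05
# 4096 ~2.9e-07   -> smaller and smaller, but never exactly zero
```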

what we see on our monitor while working in Rhino is basically also just a polygonal representation of our NURBS. why is that so? is there no other way to send data to the graphics card? and if there were a way, would it not save the double effort of calculating NURBS internally while having to remesh on every change, bothering processor, memory, graphics card and, last but not least, our esthetically sensitive eyes with this?
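
for concreteness, that "polygonal representation" step looks roughly like this: sample the exact surface on a u/v grid and emit triangles (a toy sketch, not Rhino's actual display-mesh code):

```python
def tessellate(S, nu, nv):
    """Sample a parametric surface S(u, v) on an nu-by-nv grid and emit a
    triangle list: the 'display mesh' the graphics card actually draws."""
    verts = [S(i / (nu - 1), j / (nv - 1)) for j in range(nv) for i in range(nu)]
    tris = []
    for j in range(nv - 1):
        for i in range(nu - 1):
            a = j * nu + i                   # indices of the quad's corners
            b, c, d = a + 1, a + nu, a + nu + 1
            tris += [(a, b, c), (b, d, c)]   # two triangles per quad
    return verts, tris

# A curved patch; every change to it means re-running this whole step.
S = lambda u, v: (u, v, u * u + v * v)
verts, tris = tessellate(S, 32, 32)
print(len(verts), "vertices,", len(tris), "triangles")  # 1024 vertices, 1922 triangles
```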

that everything displayed on our monitor ultimately has to come down to pixels and subpixels is clear to everybody, i believe. but why the extra polygon conversion… why why why :slight_smile:


Simply because the drawing API used (OpenGL) works with meshes. Ultimately all geometry gets converted to a bunch of triangles. If not in the program, then at least on the hardware doing the drawing.

Edit: excluding lines and dots, although in the WIP the lines are now triangles too.

/Nathan

OK. Rhino has done the display generation in the CPU since the beginning - way back before video cards even had anything they called the GPU.

Is it a fair question to ask whether the display mesh should be generated in the GPU these days? Certainly passing the NURBS to the GPU would involve several orders of magnitude less data than the complete display mesh, wouldn’t it?
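
A rough back-of-the-envelope check on that claim, with illustrative numbers only (the patch and mesh sizes below are made up, but plausible):

```python
FLOAT = 4  # bytes per 32-bit float

# One bicubic NURBS patch: 4x4 control points (x, y, z, weight)
# plus two knot vectors of 8 floats each.
nurbs_bytes = 4 * 4 * 4 * FLOAT + 2 * 8 * FLOAT

# A display mesh for the same patch at 100x100 samples: positions and
# normals (3 floats each), plus two triangles per quad (three 4-byte indices).
n = 100
mesh_bytes = n * n * (3 + 3) * FLOAT + (n - 1) * (n - 1) * 2 * 3 * 4

print(nurbs_bytes, "bytes for the exact patch")    # 320
print(mesh_bytes, "bytes for the display mesh")    # 475224, ~1500x more
```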

hi @nathanletwory thanks for chiming in.

well yes, good old OpenGL, but is that maybe also a bit of a lazy excuse? :slight_smile: or is it a security reason, so that screen data cannot be echoed back from users? and since OpenGL already has all this implemented, i believe, at least so it seems, why not simply be … ok with it?

my knowledge about hardware/software communication is limited, but if it were theoretically possible to skip meshes altogether, for the sake of our always too limited physical hardware resources, would that not have a lot of performance advantages?

somebody around here said very recently that you guys develop everything from scratch, without acquiring any other code. whether that's true or not, why not create a new graphics API which would skip this double effort for our computers? would that be too far out? has it ever been thought of, or is it just mathematically not possible to feed NURBS data into a card?

you say all geometry gets triangulated at the latest in the hardware, but that's also just because the cards' drivers work like that. it's still just bits and bytes, so maybe that could be rethought one day? a new generation of graphics, and finally we could say bye bye to bad mesh quality. would that not be something to strive for?

@AlW i can't remember well, are you talking about graphics cards prior to the '90s or even earlier?
but there was no Rhino then yet :smiley: … also, i don't know much about its former GPU usage, but i still have the feeling it does not use it much :slight_smile:

FYI, I have been working on the new Raytraced mode in the current Rhino WIP. This mode is based on the Cycles render engine, from Blender 3D.

That would effectively be writing an API and drivers for all the GPU cards out there. Although an interesting idea, I don’t think we have the resources (time or manpower) to do such a huge task. I’d put this in the "far out" category. I could be mistaken of course :slight_smile:

I don’t think that is the problem. But then again my knowledge in the NURBS area is limited at best.

It is possible, for each pixel in a viewport, to use the NURBS descriptions of all active objects to determine which objects are aligned with that pixel and the corresponding “z” value (distance from the viewpoint). This information can then be used to determine what is seen on the monitor. If the RGB values depend on the angle of the surface at that pixel, then the normal direction of the NURBS at that location also needs to be calculated. Doing so is much more (potentially orders of magnitude) computationally intensive than the equivalent operations for displaying a mesh defined by a set of vertices. Iterative calculations involving the NURBS geometry would be needed, because general NURBS equations cannot be inverted directly in an efficient way.

Also, an efficient algorithm for doing the calculations using the NURBS descriptions would probably start with a mesh to obtain a preliminary answer, and then use further iterative calculations to refine the result.
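
To illustrate what such an iterative calculation could look like: a minimal Newton-iteration sketch for intersecting a view ray with a parametric surface, with a coarse-mesh hit supplying the initial guess. The surface S and all names here are invented for the example; real NURBS code would use exact derivatives instead of finite differences:

```python
import numpy as np

def intersect_ray_surface(origin, direction, S, u0, v0, t0, steps=20, tol=1e-12):
    """Newton iteration for ray(t) = S(u, v).

    origin, direction: the view ray; S(u, v) -> 3D point (any smooth
    parametric surface, e.g. one NURBS patch). (u0, v0, t0) is the
    initial guess, in practice taken from a coarse-mesh hit."""
    x = np.array([u0, v0, t0], dtype=float)
    h = 1e-6  # step for finite-difference partial derivatives
    for _ in range(steps):
        u, v, t = x
        F = S(u, v) - (origin + t * direction)   # residual: surface minus ray
        if np.dot(F, F) < tol:
            break
        Su = (S(u + h, v) - S(u - h, v)) / (2 * h)
        Sv = (S(u, v + h) - S(u, v - h)) / (2 * h)
        J = np.column_stack([Su, Sv, -direction])  # 3x3 Jacobian
        x = x - np.linalg.solve(J, F)
    return x  # (u, v, t) of the intersection

# Example: a curved "dome" surface, one ray shot straight down.
S = lambda u, v: np.array([u, v, 1.0 - u * u - v * v])
uvt = intersect_ray_surface(np.array([0.3, 0.2, 5.0]),
                            np.array([0.0, 0.0, -1.0]),
                            S, u0=0.0, v0=0.0, t0=4.0)
print(uvt)  # converges to u=0.3, v=0.2, t=4.13
```

And that is for a single pixel; a viewport has millions of them, every frame.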

Modern GPU cards are amazing, but they still work at a finite speed.

That is possible and voxel rendering was a hot topic in the '90s. I haven’t read up on this but apparently some companies think this is still a good idea - Why voxels are the next “big thing” in graphics.

I’ve also read that things like Minecraft are voxel-based but need to convert to meshes - I dunno…

the only question is how much of the onboard chipset, which stores algorithms and whatever else to make things faster, would have to be dumped in doing so… i sure understand that all graphics cards raced into that direction sometime around 2000, maybe a bit before.

but it still does not change the fact that for what we see on our flat screen, a mesh has to be used, which then, complicatedly enough, low poly or not, gets phonged to give an impression of smoothness. a cheat with the very negative side effect of calculating something extra just to cover up polygons which we are not even supposed to see :smiley:
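
for the record, that "phonging" cheat is basically this: interpolate the vertex normals across each flat triangle so the shading pretends the facet is curved (a minimal sketch, nothing like Rhino's actual shading code):

```python
import numpy as np

def shaded_normal(n0, n1, n2, b):
    """Phong-style shading: interpolate the three vertex normals of a
    triangle with barycentric weights b = (b0, b1, b2), then renormalize.
    The geometric facet stays flat; only the shading pretends otherwise."""
    n = b[0] * n0 + b[1] * n1 + b[2] * n2
    return n / np.linalg.norm(n)

# Two vertex normals tilted apart, as on a coarse mesh of a cylinder:
n0 = np.array([0.0, -0.5, 1.0]); n0 /= np.linalg.norm(n0)
n1 = np.array([0.0,  0.5, 1.0]); n1 /= np.linalg.norm(n1)
n2 = np.array([0.0,  0.0, 1.0])

# In the middle of the triangle the shading normal points "between" the
# vertex normals, hiding the flat facet from the eye.
print(shaded_normal(n0, n1, n2, (1/3, 1/3, 1/3)))
```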

this sounds interesting, but you say it would make more sense to calculate one “area” in polygons first? let's say one side of some volume with curved geometry, which, seen as pixels, displays a gradient of colors: that would still have to be split into polygons, with this additional step of dividing it first?

can you explain that a bit closer?

well, it seems to be a sort of new old-school thing, giving an old look, which has grown in popularity recently since some game… i have read that for special effects like smoke, voxels are fancied due to their grainy look.
but i am not sure that's the answer… at least not 3d voxels, if that's what you wanted to point out.

somewhere during further reading something maybe interesting came up, going by the name of raycasting, which sounds like an interesting approach, seen here for example… an interesting read about voxels, vectors and pixels by the way :slight_smile: substantial for a quick read-in.

I’m talking about the cards that made the graphics cores available for apps to use - much later than the '90s. Also when CUDA and OpenCL became available.

yes, that should've happened around the turn of the millennium, from what i read.

i have also read that CUDA/OpenCL can do shading without even touching the hardware drivers, up to the point that the graphics card can be used as a mere coprocessor. maybe that would be an idea for what i am talking about.

now if that could be fed with data coming from inside Rhino to compute the NURBS, maybe it would be a smooth way to display surfaces, lines, anything, without having to go through polygons.
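
to make the coprocessor idea concrete: evaluating a curve at many parameters at once is an embarrassingly parallel job, exactly what a CUDA/OpenCL kernel is good at. a rough sketch using numpy's vectorization as a stand-in for the GPU (a plain cubic Bézier instead of a full NURBS, to keep it short; none of this is actual Rhino or CUDA code):

```python
import numpy as np

def bezier_batch(ctrl, ts):
    """Evaluate a cubic Bezier curve at many parameters at once.
    ctrl: (4, 3) control points; ts: (N,) parameters in [0, 1].
    Every output point is independent of the others, so on a GPU each one
    could be computed by its own thread; numpy stands in for that here."""
    t = ts[:, None]
    b = np.hstack([(1 - t) ** 3,
                   3 * t * (1 - t) ** 2,
                   3 * t ** 2 * (1 - t),
                   t ** 3])          # (N, 4) Bernstein weights
    return b @ ctrl                  # (N, 3) points on the curve

ctrl = np.array([[0, 0, 0], [1, 2, 0], [3, 2, 0], [4, 0, 0]], float)
pts = bezier_batch(ctrl, np.linspace(0.0, 1.0, 1_000_000))
print(pts.shape)  # (1000000, 3): a million exact curve points, no polyline in sight
```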

any thoughts? @davidcockey @nathanletwory

I asked the internet, and was given links to PDFs like this and this


So it’s all been worked out on the internet and all you and Jeff need to do is code it up! :smile: So will it be in the 2nd December WIP? :wink:

I happily concentrate on integrating Cycles as the Raytraced mode, crunching out pixels from polygons… :slight_smile:

/Nathan

Edit: add smiley

We have thought of this approach and may be able to implement it for curves in the future. But it is:

  • only going to be possible on certain systems
  • very difficult to do right (and debug)
  • in many cases probably not as fast as just drawing triangles

No doubt.

Hard to accept, as we all regard you folks as some kind of computing gods.

You referred to implementing it for curves, but the part of this topic that got my attention was converting NURBS to pixels in the GPU, as described in Nathan’s first PDF link. It implied a 10x speedup could be had.

I believe that was a 10x speedup from doing NURBS calculations in the GPU compared to doing them in the CPU. Doing NURBS calculations is still much more computationally intensive than doing triangle mesh calculations, even if both are done in the GPU.

As the render mesh is made finer and finer, a point may be reached where direct NURBS calculations will be faster. But that may be past the point where further mesh refinement improves what is displayed, due to the finite size of the display pixels.