Audi CES VR and why Rhino needs to handle complex models in 60 fps

VR is coming at full speed, and Audi has worked with VR for decades; now they are taking the step into virtual showrooms for their dealers.


I hope this helps McNeel see the importance of realtime handling of really complex models, with fast OpenGL and all the fancy eye candy it can muster, at the highest framerates possible. Realtime IS the future. Fast iterative render engines with real light simulation are great for stills and animation, but they do not do the job when 60 completed frames per second are needed. So please look into the crystal ball and lay the path now for how to accomplish a fast realtime evaluation mode.

To me it would make sense to use blocks and nested blocks for this, so they can be handled as "static" objects, in the same way that 10,000 meshes joined into one are much faster to handle than 10,000 separate meshes. If blocks could be used like this, it would be easy for users (UX-wise) to manage what is static and what can be modified.
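A hypothetical back-of-the-envelope sketch of why one joined mesh draws so much faster than thousands of separate ones: each separate object carries a fixed per-draw-call cost on the CPU/driver side, while a joined mesh pays it once. The overhead numbers below are made up purely to illustrate the shape of the problem, not measured from Rhino.

```python
# Assumed, illustrative costs (not measured): a fixed per-draw-call overhead
# plus a per-triangle cost that is the same either way.
PER_DRAW_CALL_US = 50.0   # microseconds of CPU/driver overhead per object
PER_TRIANGLE_US = 0.001   # microseconds to process one triangle

def frame_time_us(object_count, triangles_per_object):
    """Toy model of one frame's cost: per-object overhead + per-triangle work."""
    total_triangles = object_count * triangles_per_object
    return object_count * PER_DRAW_CALL_US + total_triangles * PER_TRIANGLE_US

separate = frame_time_us(10_000, 100)    # 10,000 meshes, 100 triangles each
joined   = frame_time_us(1, 1_000_000)   # same triangle count, one joined mesh

print(separate)  # 501000.0 us -> ~0.5 s per frame, dominated by draw calls
print(joined)    # 1050.0 us  -> ~1 ms per frame for identical geometry
```

The geometry work is identical in both cases; the difference is entirely the fixed per-object cost, which is why batching static objects (or treating blocks as static) pays off.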


Hi Jørgen,

There is nothing wrong with wanting all the possible eye candy at full speed. What I disagree with, however, is the idea that Rhino needs to catch up with this, and certainly not without a major increase in pricing.

At the current state, it's most important to optimize the realtime rendering of complex models without the eye candy. If I'm not mistaken, the speed bottlenecks are still the realtime rendering of (many) blocks, large curve sets and complex scenes in general. Rhino should provide good speeds for mid-range GPUs; asking for 60 fps realtime VR experiences of complex scenes is orders of magnitude beyond that.

The VR experience Audi is bringing to their showrooms is highly specialized and optimized software, with models that are packed with trickery to get the fast and fancy results. No way you can start tweaking curvature on the side panels. So IMO this is not a fair comparison, nor something to aim for.

Your suggestion of a different handling of the render meshes is probably a little too simplistic and might not be all that easy to implement for a scene with volatile geometry.

However, and maybe @jeff can shed some light on this:
From what I understand, a lot of the overhead in viewport rendering has to do with how the structure of Rhino objects is set up, and the need for it to be updated whenever objects in the model change, etc.
Could there be a way to create this 'static' model Jørgen is referring to and render only that, in a special presentation/evaluation-mode type display where even curves are tessellated, for instance?
But ignorance is probably on my side in thinking this is an easy solution.

-Willem

I agree that scene handling at high speed comes before eye candy, but eye candy is key to aesthetic evaluation, so it needs to be in the loop from the beginning. That said, Rhino has all the eye candy it needs, but it should handle, for example, 20 static lights in a way where it doesn't recalculate the shadow maps for all of them on every frame, even when they have not been moved or otherwise altered.
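A minimal sketch of the caching this describes, using a classic dirty-flag pattern. All names here are made up for illustration (this is not Rhino's display API): a shadow map is only recomputed when its light has actually changed, so 20 static lights cost nothing after the first frame.

```python
class Light:
    """Toy light with a dirty flag guarding its cached shadow map."""
    def __init__(self, position):
        self.position = position
        self.dirty = True           # no shadow map yet
        self.shadow_map = None

    def move(self, new_position):
        self.position = new_position
        self.dirty = True           # invalidate the cached map

def compute_shadow_map(light):
    # Stand-in for the expensive shadow-map render pass.
    return ("shadow map at", light.position)

def render_frame(lights, recomputed):
    """Recompute shadow maps only for lights flagged dirty, then draw."""
    for light in lights:
        if light.dirty:
            light.shadow_map = compute_shadow_map(light)
            recomputed.append(light)
            light.dirty = False
        # ... draw the scene using light.shadow_map ...

lights = [Light((i, 0, 0)) for i in range(20)]

recomputed = []
render_frame(lights, recomputed)   # frame 1: all 20 maps built
n_first = len(recomputed)

recomputed = []
render_frame(lights, recomputed)   # frame 2: nothing changed, zero work
n_second = len(recomputed)

lights[0].move((5, 5, 0))
recomputed = []
render_frame(lights, recomputed)   # frame 3: only the moved light
n_third = len(recomputed)

print(n_first, n_second, n_third)  # 20 0 1
```

The same idea extends to any cached per-object GPU resource: pay the cost once, and invalidate only what the user actually edits.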

Here is a simple test script I just wrote for the possible Holomark 3. It makes a simple branch out of a cylinder and a sphere, assembles 3 of these into a tree, then populates 100 trees (a total of 300 spheres and 300 cylinders arranged in nested blocks), runs TestMaxSpeed and calculates fps. It is a rough hack, so it does not take into account view size or a possibly edited display mode.

H3 Forest test.py (1.4 KB)

(PS: regarding meshing and complex files, take a look at this from 2011, done by a student: https://www.youtube.com/watch?v=F9oLyOwbFp0)

We agree,

As for your test script, some numbers for 100 frames with TestMaxSpeed (manual run):

Shaded mode (with edge curves):
42.36 sec : all blocks
28.50 sec : exploded blocks

Rendered mode (no curves):
15.70 sec : all blocks
10.06 sec : exploded blocks
8.42 sec : extracted render mesh joined into 1 mesh

-Willem

Edge curves? As in edges + isocurves? (Default, unaltered Shaded mode?)

I ran TestMaxSpeed manually as you did; here is my data:

27" monitor at 2560x1440, default 4-view, Large Objects - Millimeters, 0.01 tolerance.
Win 10, Quadro 4000, dual Xeon.

Shaded mode, default:
8.66 sec: all blocks.
9.23 sec: exploded blocks.

Rendered mode, default (with shadows)
30.03 sec: all blocks
20.78 sec: exploded blocks
12.64 sec: extracted render mesh joined into 1 mesh (3,162,000 polygons)

Rendered mode, default (no shadows)
20.09 sec: all blocks
9.63 sec: exploded blocks
2.73 sec: extracted render mesh joined into 1 mesh
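For reference, TestMaxSpeed here reports the time for 100 frames, so converting to fps is just frames divided by seconds. A quick sketch using the Rendered-mode (no shadows) numbers above:

```python
def fps(frames, seconds):
    """Convert a TestMaxSpeed result (frames rendered in N seconds) to fps."""
    return frames / seconds

# Rendered mode, no shadows, from the runs above (100 frames each):
print(round(fps(100, 20.09), 1))  # all blocks      -> 5.0 fps
print(round(fps(100, 9.63), 1))   # exploded blocks -> 10.4 fps
print(round(fps(100, 2.73), 1))   # one joined mesh -> 36.6 fps
```

So even on this hardware, the single joined render mesh gets within striking distance of realtime, while the blocked scene is an order of magnitude away.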

this VR stuff is currently going to be primarily limited to consumption… for others to view our models in an ooh&ahh way. (and i’m not knocking that btw… will be cool)

i want to put those things on (well, not those exact things… they’re too big and goofy looking right now :wink: )… and model.
except when i put them on, i won’t be able to see my keyboard.

i don’t know… yes, VR tech is very important and very exciting (to me at least)… but we equally need new technology/methods for how we interact with the computer before these things can be used practically/beneficially during the design & modeling phases… because the keyboard and mouse aren’t going to cut it.


(just opinions and points for sake of discussion… i don’t feel incredibly strong about much of this even though my tone may have implied otherwise.)


I need a new laptop :wink:

-Willem