RhinoRender vs OpenGL vs Cycles

Hi guys, I think Cycles (the Raytraced viewport) is really coming along well, but it doesn’t handle reflections, refractions or even the groundplane correctly yet; transparent objects are the worst case right now.

The speed is great though, and 500 iterations seems like a good default.

@GregArden why is the groundplane grayer? It also doesn’t seem to get the same lighting as Rhino Render’s groundplane does.

When you get these things fixed I really hope it will replace Rhino Render as the default render engine as well. Rhino Render took 6 minutes to complete that image while Cycles did it in 28 seconds. (Is that down to a 4-core i7 vs a GTX 970?)

Oh, and it doesn’t support SubD objects either; I had to extract the render mesh from that bottom object for it to render. Same goes for Rhino Render.

Did another test with an all too familiar file:
(Just download Holomark from Food4Rhino and extract it from the installer if you want to play with it.)

@jeff it seems like OpenGL has too little reflection on transparent objects. In real life an object can still have reflections even though it is 100% transparent, because reflection happens at the surface of the object, not in the material; it will just reflect light, though, not dark areas.

I see that Rhino Render also has too little reflection when transparency is high. The setting for the material is 92% transparency and 100% reflection.
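The physics here is the Fresnel effect: a fully transparent dielectric still reflects a few percent of incoming light head-on and much more at grazing angles, because the reflection happens at the surface. A quick back-of-the-envelope sketch (plain Python, not Rhino or Cycles code) using Schlick’s approximation shows the magnitudes involved:

```python
# Rough illustration (not Rhino/Cycles code): Fresnel reflectance via
# Schlick's approximation for a clear dielectric such as glass (IOR ~1.5).
import math

def schlick_reflectance(cos_theta, ior=1.5):
    """Approximate fraction of light reflected at a dielectric surface."""
    r0 = ((1.0 - ior) / (1.0 + ior)) ** 2          # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

for angle_deg in (0, 30, 60, 80, 89):
    cos_theta = math.cos(math.radians(angle_deg))
    print(angle_deg, round(schlick_reflectance(cos_theta), 3))

# Roughly 4% reflectance head-on, rising towards 1.0 at grazing angles -
# so a 100% transparent material should still show clear reflections.
```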

The reflection and refraction issues I am fully aware of. At the moment I am reworking the Cycles integration in the viewport from the ‘old’ display pipeline architecture to the new conduit system created by @andy. This is coming along fine; I estimate it will be complete sometime this week.

You may have seen my posts (here, here and here) about recreating and improving the material definitions using the Grasshopper plug-in I’ve been working on. I am creating this plug-in to tackle exactly the problems you are describing.

Some technical background:

Rhino has a complex environment with no fewer than three different types of background in one:

  1. background as seen directly from camera when no object obscures the background
  2. reflection/refraction background, which is seen in the reflections of items or when seeing through an item
  3. the background that is used as the skylight.

I have that currently implemented here.
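To give a rough idea of how such a split is usually expressed in Cycles shader terms: a Light Path node drives mix shaders so that camera rays, glossy/refraction rays and diffuse (skylight) rays each see their own background. The sketch below uses vanilla Cycles through Blender’s bpy API purely as an illustration; the actual RhinoCycles environment shader is built differently, and the colors here are arbitrary.

```python
# Illustration only, using vanilla Cycles via Blender's bpy - not the
# RhinoCycles shader code. Three "backgrounds" mixed by ray type.
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes, links = world.node_tree.nodes, world.node_tree.links
nodes.clear()

out        = nodes.new('ShaderNodeOutputWorld')
light_path = nodes.new('ShaderNodeLightPath')

camera_bg = nodes.new('ShaderNodeBackground')   # 1. seen directly by the camera
refl_bg   = nodes.new('ShaderNodeBackground')   # 2. seen in reflections
sky_bg    = nodes.new('ShaderNodeBackground')   # 3. acts as the skylight
camera_bg.inputs['Color'].default_value = (0.2, 0.3, 0.8, 1.0)
refl_bg.inputs['Color'].default_value   = (0.8, 0.8, 0.8, 1.0)
sky_bg.inputs['Color'].default_value    = (1.0, 1.0, 1.0, 1.0)

mix1 = nodes.new('ShaderNodeMixShader')  # skylight bg vs reflection bg
mix2 = nodes.new('ShaderNodeMixShader')  # ...vs what the camera sees directly

# Glossy rays get the reflection background, everything else the skylight bg.
# A fuller setup would also route 'Is Transmission Ray' (refraction) to the
# reflection background, e.g. via a Maximum math node.
links.new(light_path.outputs['Is Glossy Ray'], mix1.inputs['Fac'])
links.new(sky_bg.outputs['Background'],  mix1.inputs[1])
links.new(refl_bg.outputs['Background'], mix1.inputs[2])

# Camera rays get the directly visible background.
links.new(light_path.outputs['Is Camera Ray'], mix2.inputs['Fac'])
links.new(mix1.outputs['Shader'],          mix2.inputs[1])
links.new(camera_bg.outputs['Background'], mix2.inputs[2])

links.new(mix2.outputs['Shader'], out.inputs['Surface'])
```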

The custom material is even worse; I have to simulate that entire thing in one HUGE shader. Both the material and environment shaders have light path nodes in them to enable and disable parts of rays that bounce around in a scene. For the custom material that means ensuring only part of the rays are evaluated for proper background reflection, but the interactions between those mean that some of the parts currently seem to be (and are) broken. Testing those is quite a challenge: make changes to the code, recompile, restart Rhino, reload a test scene and see what the changes did. Very time consuming, and the recompile-start-load cycle easily breaks the node flow in my head.

With the Grasshopper plug-in I am able to iterate through the materials and see the effects of changes directly. I can now properly work on making materials that behave correctly with the Rhino environment as well :slight_smile:

You can be certain that I’ll be posting videos where I try to work on the materials to get perfect results.

/Nathan

Since SubD is still experimental I’m not surprised the ChangeQueue (the motor driving interactivity for Cycles in Rhino) isn’t handling those yet :slight_smile:

/Nathan

Yes, rendering on the GPU is much faster than on the CPU.

If you want to check the speed of Cycles on the CPU so you can compare with Rhino Render, you can use the RhinoCycles_SelectDevice command and give 0 as the device number. (If you’re interested in trying other devices, check with RhinoCycles_ListDevices to see what is available. Don’t try the network device though; it does not work yet, but might at some point in the future.)
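If you would rather script that than type the commands, something along these lines should work from Rhino’s Python script editor; only the command names come from the text above, and the macro syntax (answering the device-number prompt inline) is an assumption.

```python
# Run from Rhino's Python script editor. Uses the RhinoCycles commands
# mentioned above; device 0 is the CPU, so Cycles can be compared with
# Rhino Render on the same hardware.
import Rhino

# List the available render devices in the command-line output.
Rhino.RhinoApp.RunScript("_RhinoCycles_ListDevices", True)

# Select device 0 (the CPU). The trailing "0" is assumed to answer the
# command's device-number prompt as part of the macro.
Rhino.RhinoApp.RunScript("_RhinoCycles_SelectDevice 0", True)
```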

/Nathan

Thanks. Yes, I already did that on the Mini render; as you can see there, Cycles is slower at 500 iterations than Rhino Render at default settings. But the AA is better and the shadows are slightly less noisy :muscle:

A small and simple speed test to compare different systems (25 cubes):

First results are from the old workstation’s two devices:
Quadro 4000: 32 sec
dual Xeon X5650: 25 sec

V6_RenderTest_25Cubes.3dm (674.3 KB)

Two minor things:

  • “Path tracing sample” always shows 1 less than completed.
  • The text in the gray feedback area is blurry.

A nice, simple scene. I’d suggest a mixture of basic metal and basic glass materials to increase the computational load. For fun I’d set one of the visible corner cubes to an emissive material (solid color, nothing else).

For the background, just a solid background color, with no custom reflection environment and no custom skylight environment, though both can otherwise be enabled. I suggest a simple environment setup like that because I’m not happy with the current complete environment shader when custom reflection and skylight environments are enabled.

One more thing you could add is several lights, as each light increases the computational load quite a bit. The emissive cube I suggested above will already increase that (for the shadow bounces).
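If anyone wants to rebuild such a test scene programmatically, a rough sketch along these lines should work in Rhino’s Python script editor; the grid size, spacing and light positions are arbitrary, and the metal/glass/emissive materials would still be assigned by hand.

```python
# Quick sketch: generate a 5x5 grid of cubes plus a few point lights for a
# render benchmark. Dimensions and spacing are arbitrary; materials (metal,
# glass, one emissive cube) would still be set up in the material editor.
import rhinoscriptsyntax as rs

size, gap = 10.0, 5.0
step = size + gap

for i in range(5):
    for j in range(5):
        x, y = i * step, j * step
        # Eight corner points: bottom rectangle first, then top rectangle.
        corners = [(x, y, 0), (x + size, y, 0),
                   (x + size, y + size, 0), (x, y + size, 0),
                   (x, y, size), (x + size, y, size),
                   (x + size, y + size, size), (x, y + size, size)]
        rs.AddBox(corners)

# A few point lights; each extra light adds to the per-sample cost.
for pt in [(-20, -20, 40), (90, -20, 40), (40, 90, 40)]:
    rs.AddPointLight(pt)
```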

Aye, both are on my todo list. Or actually, the blurry text one is on @andy’s list. :slight_smile:

/Nathan

Great, if you like I can make one next week, or you can tweak the file included above to your liking.
Now I have to see if I can save my Mac/Windows machine that I tried to upgrade with a larger drive, only for the software to toast the original drive instead… :expressionless: And I need it at work tomorrow as I am flying over to the customer… wish me a pleasant night… :smiley:

By the way, the GeForce 970 crunched that test in 12 seconds.

Do you choose whether your PC uses its GPU or CPUs, or does Cycles decide automatically?

By default the GPU should be used, with the CPU as a fallback, but it is possible for the user to select which render device to use.

At the moment, explicitly selecting a device keeps that device selected only for the duration of that Rhino modeling session. Once Rhino WIP is closed the setting is lost. At some point in the near future I’ll be adding code to make this a persistent setting.

Use RhinoCycles_ListDevices and RhinoCycles_SelectDevice - for the latter the indices are zero-based.

edit1: An ideal set-up would be two CUDA cards (GTX 960s for instance), and a (powerful) third OpenCL-capable graphics card. With that third card as the main display card the two CUDA cards can be dedicated to rendering whilst still maintaining very fluid user interaction overall.

edit2: I myself have three cards, two CUDA (a GT 420 and a GTX 760) and an R9 270x OpenCL card. The R9 is my main display device, and the GTX 760 is used for Cycles.

/Nathan

Thanks for the explanation Nathan.

The capitalist monster in me can’t help but think: “this multiple-cards/desktop-only stuff seems like a small market.” Don’t judge me, I’ve been trained by my work mentors.

Any laptop with a decent GPU will work too :slight_smile: I regularly test with the GT 420, which is already old and not high-end; it doesn’t have many CUDA cores, nor much memory, but it still holds up really nicely. Obviously it is less snazzy than the higher-end cards, but it still gives a nice experience.

Note that my old MacBook running Windows with a GeForce 330M manages to start Cycles, but crashes after a few iterations on more complex scenes. I presume you don’t support that old card, but maybe you could add a check to see if the card has enough RAM? I presume that’s what causes the crash/hang.

It is very likely that GPU RAM is indeed lacking. There are some code changes in upstream Cycles (the ‘real’ Cycles :wink: ) that should make it easier for me to catch this kind of situation and ensure we don’t crash all of Rhino. And I could indeed add a preprocess step that estimates memory usage and refuses to start if it goes over some threshold for the detected amount of GPU RAM: RH-33650.
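Purely as a thought experiment on what such a pre-flight check could look like (the per-vertex and per-face byte counts and the `detected_gpu_ram_bytes()` helper below are made up; none of this is part of RhinoCycles):

```python
# Hypothetical pre-flight estimate: sum up render-mesh sizes in the active
# document and refuse to start a GPU render when the estimate exceeds the
# detected GPU RAM. All constants here are rough guesses.
import scriptcontext as sc
import Rhino

BYTES_PER_VERTEX = 48   # position + normal + uv, guessed
BYTES_PER_FACE   = 16   # indices, guessed
SAFETY_FACTOR    = 2.0  # head-room for BVH, textures, framebuffers

def estimated_scene_bytes():
    total = 0
    for obj in sc.doc.Objects:
        meshes = obj.GetMeshes(Rhino.Geometry.MeshType.Render) or []
        for m in meshes:
            total += m.Vertices.Count * BYTES_PER_VERTEX
            total += m.Faces.Count * BYTES_PER_FACE
    return total * SAFETY_FACTOR

def detected_gpu_ram_bytes():
    # Hypothetical helper - RhinoCycles would get this from the CUDA/OpenCL
    # device info; there is no ready-made call for it in rhinoscriptsyntax.
    return 1 * 1024 ** 3   # pretend the card has 1 GB

if estimated_scene_bytes() > detected_gpu_ram_bytes():
    print("Scene probably will not fit in GPU RAM - fall back to the CPU.")
else:
    print("Scene should fit - OK to render on the GPU.")
```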

Hi Nathan, All,

If I get a laptop with Thunderbolt 3, can I add an external graphics card as an extra card? Is there such a thing? This would be to work alongside the internal Quadro M5000M. Or is it overkill, with not enough performance return on the investment? Thanks!

To be honest I don’t know, but I’d be interested in hearing about other users’ experiences with this.

If it is possible to hook up an extra card like that, then in theory it should show up in the device listing. I don’t think bandwidth would be a big problem, but it’s hard to say without hands-on experience :slight_smile:

/Nathan

It should work just fine.
Others have done it with other software.
I found the example with the most descriptive soundtrack here: