Danish Mood by Charles Nandeya Ehouman converted to Rhino

As part of creating test material I have been converting scenes to Rhino, starting with Danish Mood by Charles Nandeya Ehouman.

I downloaded the original Blender file from Example Scenes – LuxCoreRender. Apart from the original leather texture (which I edited a bit in GIMP 2.10) and the framed picture by Sam Willis (https://www.pexels.com/photo/low-angle-photo-of-airplane-1154619/), the materials are all from the Rhino materials library, or created in Rhino.

To get the model into Rhino I exported the scene to the OBJ format, then imported that into Rhino. I had to fix up materials quite a bit. I made some minor corrections to the model as well (light fixtures were a bit off).

I used this scene to test my fix for multi-GPU rendering. Below you can see I used an RTX A6000 and an RTX A5000 together: 1500 samples in 2 minutes and 8 seconds. The same scene with just the RTX A5000 takes around 3.5 minutes. This fix will be in Rhino 7.7.
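For context, a quick back-of-the-envelope check of those numbers (this is just illustrative arithmetic, not anything from Rhino):

```python
# Illustrative arithmetic only: scaling estimate from the render times above.
dual_gpu_s   = 2 * 60 + 8     # A6000 + A5000 together: 2m8s  = 128 s
single_gpu_s = 3.5 * 60       # A5000 alone:           ~3.5 m = 210 s

speedup = single_gpu_s / dual_gpu_s
print(f"speedup:    {speedup:.2f}x")     # ~1.64x
print(f"efficiency: {speedup / 2:.0%}")  # ~82% of an ideal 2x
```

So the second card buys roughly a 1.6x speedup here, which is decent scaling given the two cards are not equally fast.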


Great work
I need that fix… I mean, what do I have to fix to get those two cards? :slight_smile:

P.S. No Blender import in Rhino?

It’ll be in Rhino 7.7.


And no plans for a Blender importer? Please?

Not at this time.

Photorealistic~ :heart:

Great! Thanks for your dedication!

USD file format?

You can already use FBX or OBJ. I used OBJ to export from Blender into Rhino.

There is no USD importer in Rhino at this moment.


This is my version… work in progress… (V-Ray 5, Rhino 7). :wink:
Any advice?


Yes. Make your bed. Seeing this, Jordan Peterson would go mad. :wink:

But the rendering is fantastic.

// Rolf


Rolf, thanks. I'm lazy; I prefer a healthy disorder to a fake order! :+1:
We would still have to define the whole window frame (where the light enters)… that part is only sketched… everything else seems good to me.

The composition of the scene is mine, but the credit goes to V-Ray. You can't ask for better (even if it's not the only great engine; there are many others…).


I just did a rerender of this scene with Rhino Render, using multi-GPU OptiX, which trims the rendering time from 2m8s down to 1m22s. That is only 64% of the time it took with multi-GPU CUDA.

For comparison, I rendered this scene earlier today on the CPU with all cores selected (CPU x8); at a resolution of 800x1297 it took a whopping 1 hour, 50 minutes and 2 seconds…
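Putting the reported times side by side (again, just illustrative arithmetic):

```python
# Illustrative arithmetic: CPU vs multi-GPU OptiX render times reported above.
cpu_s   = 1 * 3600 + 50 * 60 + 2   # CPU x8:          1h50m2s = 6602 s
optix_s = 1 * 60 + 22              # multi-GPU OptiX: 1m22s   = 82 s

print(f"OptiX is ~{cpu_s / optix_s:.0f}x faster than CPU here")  # ~81x
```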


FWIW: I wanted to know what V-Ray would be capable of in 1m22s on my relatively old hardware, so I quickly threw this scene together. I think it shows that with the right software, CPU rendering is still up to the task:

credits photo in frame: @nathanletwory :slight_smile:


Nice :slight_smile:

72 threads! I had only 8 (:

Btw @Gijs, how many samples did you use here? And was any denoising used?

I rerendered with some more stats. It's using the Intel denoiser, which is very fast. It kicks in after 4 samples/pixel (27 seconds), which already gives a usable image:

and reaches 24 samples/pixel after around 1m20s:


Denoising is a godsend!

@nathanletwory that is one thing I have wanted to follow up on: can you please make the denoiser kick in earlier in the rendering? If I could, I would have it on for every pass shown.

The denoisers in Rhino are all post effects outside of my rendering. You'll have to poke @DavidEranen, @andy and @johnc about that. FWIW, running the denoiser on every pass would slow down rendering quite a bit. Right now I believe denoising, if enabled, kicks in at about 5 seconds into the rendering and then runs once every second.
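A minimal sketch of how such time-based throttling could look (an assumption for illustration only, not Rhino's actual implementation; `ThrottledPostEffect` is a hypothetical name):

```python
import time

# Hypothetical sketch: run an expensive post effect (e.g. a denoiser) no
# earlier than an initial delay, then at most once per interval thereafter.
class ThrottledPostEffect:
    def __init__(self, initial_delay=5.0, interval=1.0, clock=time.monotonic):
        self.initial_delay = initial_delay
        self.interval = interval
        self.clock = clock
        self.start = clock()
        self.last_run = None

    def should_run(self):
        now = self.clock()
        if now - self.start < self.initial_delay:
            return False  # too early: let the render accumulate samples first
        if self.last_run is None or now - self.last_run >= self.interval:
            self.last_run = now
            return True   # run the post effect on this pass
        return False      # skip: ran less than `interval` ago
```

The injectable `clock` is just there to make the sketch testable; the point is that an early, then once-per-second, denoise pass keeps the preview responsive without eating rendering time on every pass.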

FWIW2: I tend to turn off all post effects, since they take up processing power. Even gamma, and I would also turn off tone mapping if at all possible. That speeds up rendering quite a bit too. I enable them once the rendering is done.

OK, but how much can it really slow things down, when games run denoisers in their pipeline 60 times a second on top of everything else?

To me, getting a good impression as fast as possible is more important than shaving a minute off the final render. But then, I don't do animations.

Exactly that. I always need many test renders before the final result, and this denoising is a huge time saver for more quickly evaluating what the render is converging to. Compared to the time per pass there is hardly any penalty.
