SLI / NVLink: 2x RTX 3090 and Rhino?

Hello, has anybody tried to run Rhino using two 3090s in SLI / NVLink?

Hi Alan,

What would you expect to gain from SLI? We have some people on our team running dual 3090s, but it never occurred to me to add the SLI bridge. We tried it once for Unreal and it didn’t do anything at all. I don’t think Rhino would take any advantage of it either. SLI seems to me to be mostly snake oil; has anyone seen real-world performance gains with it?

G

48 GB of RAM. I’m struggling with GPU RAM: my scene looks like it’s collapsing and Rhino closes. I’m using all of my GPU memory. Lots of textures and quality that I don’t want to give up.

SLI is dead, and it doesn’t let video cards share VRAM. NVLink does, but the renderer has to support it, and while it’s not dead like SLI it’s in decline.

Are you sure you’re actually running out of VRAM? That’s kind of crazy; it seems like most people don’t run into this and just don’t bother with NVLink. With Iray on multiple GPUs I once had a crashing problem that was actually due to not having enough Windows virtual memory (not physical RAM!) allocated in case it needed to cache all that VRAM; or maybe it was the allocation for the RAM space that programs talk to instead of dealing directly with the VRAM. Something of that sort, anyway.
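If you want to check whether it really is VRAM (and not system or virtual memory), a quick way is to watch per-GPU memory while the heavy scene is open. A minimal sketch, assuming `nvidia-smi` is on your PATH (it ships with the NVIDIA driver); the poll interval and loop count are arbitrary:

```python
# Minimal sketch: confirm whether VRAM is actually exhausted while the
# heavy scene is loaded, by polling nvidia-smi.
import subprocess
import time

QUERY = [
    "nvidia-smi",
    "--query-gpu=index,name,memory.used,memory.total",
    "--format=csv,noheader,nounits",
]

def vram_snapshot():
    """Return (gpu_index, name, used_MiB, total_MiB) for every NVIDIA GPU."""
    out = subprocess.check_output(QUERY, text=True)
    rows = []
    for line in out.strip().splitlines():
        idx, name, used, total = (field.strip() for field in line.split(","))
        rows.append((int(idx), name, int(used), int(total)))
    return rows

if __name__ == "__main__":
    # Poll while orbiting the scene in Rhino; if "used" pins at "total"
    # right before the crash, it really is VRAM and not virtual memory.
    for _ in range(10):
        for idx, name, used, total in vram_snapshot():
            print(f"GPU {idx} ({name}): {used} / {total} MiB used")
        time.sleep(5)
```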

SLI is unsupported for Rhino.

I run multiple GPUs, and in my experience NVLink is more about doubling the VRAM; your performance won’t scale linearly, and even that is only if you get lucky, since 99.99% of applications don’t support it.
If 24 GB of VRAM is not enough for your files, I’d just upgrade to an RTX A6000 or the new RTX 6000 Ada, so you can get 48 GB on a single GPU.
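If anyone does try the bridge, it’s worth confirming the link is actually active before blaming the renderer. A minimal sketch, assuming `nvidia-smi` is available; whether a given renderer then pools memory over the link is a separate question:

```python
# Minimal sketch: check whether an NVLink bridge is active between two cards,
# using "nvidia-smi nvlink --status" (ships with the NVIDIA driver).
import subprocess

def nvlink_report():
    """Return the raw NVLink status output, or None if the query fails."""
    try:
        return subprocess.check_output(
            ["nvidia-smi", "nvlink", "--status"], text=True)
    except (OSError, subprocess.CalledProcessError):
        return None

if __name__ == "__main__":
    report = nvlink_report()
    if not report or not report.strip():
        print("No active NVLink links reported (bridge missing or inactive).")
    else:
        # An active bridge lists the links per GPU with their speeds.
        print(report)
```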


And Cycles?

That’s not how it works, AFAIK. You have a max memory equal to the single smallest card of all your connected cards. But I hope I’m wrong, and then I need to find my NVLink in my junk drawer!

We do run out of memory with 24 GB sometimes, but in all those cases the scenes would already be completely unusable in Rhino way before we hit render. That’s another reason we render in Octane for Blender. Octane can use ‘out-of-core’ memory when you run out of VRAM, but it’s not a miracle: if you have too much memory debt it will fail.

G

And NVLink capability?

The RTX 3000 series is not SLI, as the title claims, where you would have a max memory equal to the single smallest card. Among the gaming cards, only the NVIDIA RTX 3090 has NVLink capability. I presume the fps will drop a bit, since Rhino is not compatible, but I don’t know.

I’m using Insert > block instance to manage the scene and keep the fps under control (it’s just an interior).

Blender Cycles renders faster.
So can 48 GB from two 3090s work with Rhino 7 Cycles?

They will work over NVLink as a single 48 GB GPU, but they will operate far slower than keeping the two GPUs as separate 24 GB cards.
@nathanletwory can you confirm or deny my argument?


If you haven’t bought the 3090s, you might check out the 4080 and 4090, perhaps the performance will be comparable, with lower power.

Though, you have to make sure that you have enough memory on the video card for your project.

This is just preliminary, but a 4080 will trounce a 3090, likely for less money, and lower power.

I had hoped that the 3080 and 3090 price would have dropped to something that makes sense, but I don’t see that.

Power, she says… Yes, power, I say. Actually, NVIDIA says, “an RTX 4090 takes 400 watts!” You need at least an 840 watt power supply.

https://nvidia.custhelp.com/app/answers/detail/a_id/5396/~/geforce-rtx-40-series-%26-power-specifications
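For anyone doing the arithmetic behind those numbers, here is a rough sizing sketch. The 400 W GPU figure is the one quoted above; the CPU draw, the “everything else” draw, and the 1.3x headroom factor are assumptions for illustration only, so defer to NVIDIA’s own recommendation on that page.

```python
# Back-of-the-envelope PSU sizing; illustrative numbers only.
GPU_W = 400      # RTX 4090 figure quoted in the post above
CPU_W = 150      # assumed: a mid-range CPU under load
OTHER_W = 100    # assumed: motherboard, RAM, drives, fans

sustained = GPU_W + CPU_W + OTHER_W
recommended = sustained * 1.3   # assumed headroom for transient spikes

print(f"Sustained draw ~{sustained} W -> ~{recommended:.0f} W PSU or larger")
# Sustained draw ~650 W -> ~845 W PSU or larger
```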

This is just a personal thing, but I like blower cards, such as the FE (Founders Edition) cards, as they don’t spill a lot of hot air inside the computer case. They may also work better if there is more than one card in the system. Note that blower cards often have slightly lower performance, since there is only one fan.

Cooling in PC cases needs a rethink.


AFAIK none of the linking technologies is supported. It will probably result in crashes, based on old reports.

But tbh I don’t know. I have never had the opportunity to test or work with these technologies.


The only thing I found was this.

You can see that dual GPUs are 31% faster than a single RTX 3090 in the BMW test, 80% faster in Classroom, and literally the same as a single GPU in the final test.
As @Brenda said, if you haven’t bought the 3090s, I’d just get a single 4090 and you won’t have any headaches at all.
Otherwise, if you already have the cards, you can split them and have one power the viewport while the other renders.


Hello Tay,

Looks very handy!

When I’m rendering and the render window is open, Rhino is frozen and using the viewport is impossible. Do I need two cards to avoid that? For example, can I use my 3060 and my 3090 together? How do I configure that inside Rhino?

@AlanMattano if you are using Cycles, you can go to Options > Rhino Render and select the GPU devices you want to use for rendering. Keep the main one, the one connected to your monitors, as your viewport-only GPU.
Other renderers like V-Ray will also let you pick and choose your desired GPU for rendering.

This is ideal for realtime / interactive rendering when you need to play a lot with lights and materials to achieve a specific look.

Specifically, I’d use your 3060 as your main GPU: connect it to the monitors, and keep the 3090 as your rendering GPU. This is an issue if you only render 1% of the time, since you are not making the most of your 3090. Another way is to selectively assign the default OpenGL GPU inside the NVIDIA Control Panel, to make sure Rhino can use your 3090 as your OpenGL device as well as your rendering (CUDA/OptiX) device.


I think this ‘splitting’ advice made more sense a few video card generations ago, when you had less VRAM to work with, or if you have an AMD card that just can’t do CUDA rendering. Just plug your monitor(s) into the 3090 and make sure Cycles is set to use all available cards. I used to have four 1080 Tis in one machine (I sold three to upgrade the rest of my system; what a wacky time that was) and I never saw any reason to set one as a ‘display-only’ card. The CUDA cores are a separate resource from all the traditional display stuff, and I never saw any indication that display work was impinging on the rendering. Even if it was, it wouldn’t be enough to justify leaving those cores sitting idle!
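Since Blender Cycles comes up a few times in this thread, here is roughly what “use all available cards” looks like when scripted on the Blender side; a minimal sketch assuming Blender’s bundled Python (bpy), with the script name just a placeholder. In Rhino the equivalent is the device selection under Options described above.

```python
# Minimal sketch (Blender side, bpy): enable every non-CPU Cycles device so
# both 3090s render. Run in Blender's Python console or via
# "blender --background --python enable_gpus.py" (placeholder file name).
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"   # or "CUDA" if OptiX isn't available
prefs.get_devices()                   # refresh the device list

for device in prefs.devices:
    # Turn on every GPU; leave the CPU off so it doesn't hold the GPUs back.
    device.use = (device.type != "CPU")
    print(device.name, device.type, "enabled" if device.use else "disabled")

bpy.context.scene.cycles.device = "GPU"   # render the current scene on the GPU(s)
```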
