Some general multi-GPU questions


I'm trying to set up two 2080 Tis and one 1080 Ti. I have a dual-Xeon machine with a Z9PE-D8 WS mainboard. The 1080 Ti sits in PCIe slot 1 (via an extension cable), and the two 2080 Tis are in slots 3 and 7. The monitor is connected to the 1080 Ti in slot 1.

  • It seems the monitor needs to be connected to the card in the first used slot. Is that right? When I connect the monitor to the second card instead, the PC no longer boots.

  • Strange effect: Rhino and GPU-Z show that a 2080 Ti is being used, even though the monitor is connected to the 1080 Ti. Can that be? Does the monitor signal from the 2080 Ti pass through the 1080 Ti? Is that OK, or does it cost GPU performance?

Slots 1/3/5/7 are x16 slots; the other slots are x8 only. Could I use the x8 slots as well without losing too much GPU performance? I would like to move the two 2080 Tis to slots 2 and 4, connect them via NVLink and drive the monitor from them, and put the 1080 Ti at the end in slot 7.
Slot 1 can't be used for a 2080 Ti: it should only be populated via an extension cable, because a card sitting directly in slot 1 would heat up the CPU cooler.

I don't know much about graphics card setups, and I would be glad if someone could help me find the best configuration.


If the RTX cards are supposed to do the rendering, I'd connect the monitor to the 1080 Ti and make sure it is also set as the main OpenGL card in the NVIDIA settings. That leaves the RTX cards free for rendering, which should help at least with Raytraced, and probably with other GPU-accelerated engines as well.

Not sure about NVLink and Raytraced though - I have no experience with that.

  1. Not really, no? I mean, if that's what you're seeing, sure, but the question would be why.
  2. That's possible; it's how laptops work with external GPUs. I saw a LinusTechTips video doing exactly that, using an old "mining" card with no display outputs for gaming. But unless you've specifically set it up that way, that's probably not what's actually happening.

The performance difference between x8 and x16 isn't a big deal, no. The bigger question, and I'm not even sure how much difference this really makes, is whether your system provides enough PCIe lanes to actually deliver that full performance. With dual Xeons the answer is probably yes, but you may have to watch which slots you use.
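To put rough numbers on the x8 vs. x16 question, here's a small sketch. It assumes PCIe 3.0 (8 GT/s per lane with 128b/130b encoding); the per-lane figure is a textbook approximation, not a measurement on this board:

```python
# Approximate one-direction PCIe 3.0 throughput per lane:
# 8 GT/s, 128b/130b encoding, 8 bits per byte -> ~0.985 GB/s
PER_LANE_GBPS = 8.0 * 128 / 130 / 8

def slot_bandwidth_gbps(lanes: int) -> float:
    """Rough one-direction bandwidth of a PCIe 3.0 slot with the given lane count."""
    return lanes * PER_LANE_GBPS

print(f"x16: {slot_bandwidth_gbps(16):.1f} GB/s")  # ~15.8 GB/s
print(f" x8: {slot_bandwidth_gbps(8):.1f} GB/s")   # ~7.9 GB/s
```

So an x8 slot still offers roughly 8 GB/s; for GPU rendering, where the scene is uploaded once and then stays on the card, that halved transfer rate rarely dominates render time.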

NVLink is, I think, irrelevant to CUDA rendering? It's for SLI gaming and for driving giant walls of screens. Of course you can just try it out and disable it if it doesn't help.

V-Ray can use NVLink, so I could get 22 GB of memory for rendering. (The bridge is ordered and still on the way, so I can't test it yet.) For very large scenes I could use the 1080 Ti for the system and the pooled 22 GB of the 2080 Tis for rendering.

On the other hand, Enscape benefits from the RTX features, but Enscape only uses the system card. My idea is to drive the monitor from the linked 2080 Tis as well. That way I hope to get the full 11 GB of the 1080 Ti for V-Ray rendering without losing any memory to the system and other programs. If I use all three cards for rendering, the smallest card sets the limit, and that limit could then be the full 11 GB; the rest of the 22 GB could serve the system and everything else. I'm curious what scenes can be rendered with a full 11 GB. If that isn't enough, I could switch the monitor to the 1080 Ti.
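The reasoning above boils down to a simple rule, sketched here as a toy helper (simplified model: it ignores driver and framebuffer overhead, and treats NVLink pooling the way V-Ray describes it, as additive memory for the linked pair):

```python
def effective_vram_gb(card_vram_gb, nvlink_pooled=False):
    """Usable VRAM for a multi-GPU render job.

    Without memory pooling, every card must hold the whole scene,
    so the smallest card caps the scene size. With NVLink pooling
    (as V-Ray supports for a linked pair), the pair's memory adds up.
    """
    if nvlink_pooled:
        return sum(card_vram_gb)
    return min(card_vram_gb)

# All three 11 GB cards rendering together: scene capped at 11 GB.
print(effective_vram_gb([11, 11, 11]))                   # 11
# NVLink-paired 2080 Tis with pooling: roughly 22 GB.
print(effective_vram_gb([11, 11], nvlink_pooled=True))   # 22
```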

I will post how it works and what I find out from further tests.

Thank you for your thoughts.

In the end I got the best render times with your approach. And setting the OpenGL card in the NVIDIA settings pointed Rhino at the 1080 Ti; a very useful hint.

Since NVLink causes around 10% longer render times, I decided to disable it for daily use. If I ever need more GPU memory, I will enable it again and render without the 1080 Ti. At the moment I can render a complex train interior with 30,000,000 triangulated polygons, so I suppose NVLink won't be needed often.
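For a feel of why 30 million triangles still fits in 11 GB, here's a back-of-the-envelope estimate. The bytes-per-triangle figure is a hypothetical ballpark, not a V-Ray number; real usage varies widely with normals, UVs, instancing, textures, and the renderer's acceleration structures:

```python
def mesh_vram_gb(triangles: int, bytes_per_tri: int = 100) -> float:
    """Very rough VRAM estimate for a triangulated mesh.

    bytes_per_tri is an assumed ballpark covering vertex positions,
    normals, indices, and BVH overhead; actual renderer usage differs.
    """
    return triangles * bytes_per_tri / 1024**3

print(f"{mesh_vram_gb(30_000_000):.1f} GB")  # ~2.8 GB at 100 B/triangle
```

Under that assumption the geometry alone sits well below 11 GB, leaving headroom for textures and frame buffers, which matches the observation that NVLink pooling isn't needed often.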

Thank you,
