Rendering rig. Which video card(s)?

Double-check that this will work, but this is the idea: it will give you one more 8-pin from the white 4-pin Molex.

@gustojunk, also check the height. The Asus cards seem to have a larger PCB than the Founders Edition. I could only find the measurements in cm:

Asus GeForce GTX 1080 Strix (OC Edition): 29.8 cm (length), 13.0 cm (height)
Standard Founders Edition: 27.0 cm (length), 11.1 cm (height).

BTW, the 180 W mentioned above applies to the Founders Edition; the power target of the overclocked STRIX can be much higher. It ships at 198 W (limited in hardware), and if you really unlock it, it may go up to 300 W.
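
A quick back-of-envelope check of those numbers against the usual PCIe spec ratings (slot 75 W, 6-pin 75 W, 8-pin 150 W); the connector mix per card is my assumption, so check your actual card:

```python
# Rough PCIe power-budget check. Spec ratings: slot 75 W, 6-pin 75 W, 8-pin 150 W.
RATING_W = {"slot": 75, "6-pin": 75, "8-pin": 150}

def within_budget(target_w, connectors):
    """True if the slot plus the listed connectors cover the card's power target."""
    available = RATING_W["slot"] + sum(RATING_W[c] for c in connectors)
    return available >= target_w, available

print(within_budget(180, ["8-pin"]))           # Founders Edition: (True, 225)
print(within_budget(300, ["8-pin", "6-pin"]))  # fully unlocked STRIX: (True, 300)
```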

c.

This will give me one more 8-pin connector (what I need) from TWO Molex, right? I only have ONE spare Molex:

Yeah, the height is the same 1.57″ as the gap between those two slots, 2 and 4:

Same, same. I don’t believe you need to connect both.

This is 4-pin to 6 & 8-pin:

EDIT: I take that back. This is from a Mech. Eng.

This is from Tom’s Hardware:
BlacKHawK3 Feb 14, 2014, 8:30 AM
The 8-pin PCIe connector has 2 additional ground (neutral) pins vs the 6-pin. It does NOT have additional +12V wires. The additional grounds lower the total CONNECTOR resistance and allow more power transfer, but in reality an 8-pin does not have twice the capacity of a 6-pin (which the 150 watt vs. 75 watt rating would imply).

Two molex to one 8-pin is a better solution than 6-pin to 8-pin, since each molex can handle 2 ground circuits. The molex connector is rated for 8 amps per pin on standard 18 ga. wire, so 2 connectors (2x +12V and 4x ground, total) can handle more than 192 watts on the +12V and 384 watts on the ground circuits.

If the card has 2x 8-pin connectors, you MUST connect 2x 8-pin power sources. The PCIe slot and a single 8-pin are not rated to supply the max stock TDP power of the GTX 770.
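
The arithmetic in that quote checks out, if you want to verify it yourself. A minimal sketch, assuming the quoted 8 A per pin rating, a 12 V rail, and the usual 75 W slot / 150 W 8-pin PCIe ratings:

```python
# Check the quoted Molex figures: 8 A per pin on 18 ga. wire, 12 V rail.
AMPS_PER_PIN, RAIL_V = 8, 12

twelve_v_pins = 2  # one +12 V pin per Molex, two connectors combined
ground_pins = 4    # two ground pins per Molex, two connectors combined

print(twelve_v_pins * AMPS_PER_PIN * RAIL_V)  # 192 W on the +12 V side
print(ground_pins * AMPS_PER_PIN * RAIL_V)    # 384 W of ground return capacity

# And the GTX 770 point: slot + one 8-pin = 75 + 150 = 225 W, under its 230 W TDP.
print(75 + 150)
```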

In your case you need a 6-pin & an 8-pin for each card, right?

SO… The only way I see to make that PSU work (but this is getting connecty) is to first combine two SATA into a 4-pin Molex, and then combine that 4-pin Molex with the other 4-pin into a final 8-pin.

Plus the one 8-pin and two 6-pins you already have = 2× 6-pin and 2× 8-pin total.
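
Purely as bookkeeping, the chain works out like this (starting counts are my guesses from the thread, not a verified parts list):

```python
# Connector bookkeeping for the proposed chain. Starting counts are assumptions
# (one spare Molex, one 8-pin, two 6-pin; the SATA count is a guess).
have = {"6-pin": 2, "8-pin": 1, "molex": 1, "sata": 4}

# Step 1: Y-adapter, two SATA -> one Molex.
have["sata"] -= 2
have["molex"] += 1

# Step 2: dual-Molex-to-PCIe adapter, two Molex -> one 8-pin.
have["molex"] -= 2
have["8-pin"] += 1

print(have)  # {'6-pin': 2, '8-pin': 2, 'molex': 0, 'sata': 2}
```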

Hope you don’t need the other SATA.


There are Y-adapters that go from two SATA plugs to one Molex, but I would suggest not plugging these into the same cable strand where you already have one Molex in use.

c.


You can get SATA → PCIe adapters (the long black L-shaped ones).
I’m also pretty sure you can get PCIe → PCIe splitters (giving you either 2× 6-pin, or 6-pin → 8-pin).

If that figure of 180 W for the 1080 is correct, then a single 12 V rail is plenty, even with only one rail per GPU: 18 A × 12 V works out to 216 W max.
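
Or, spelled out for anyone double-checking the math (the 180 W figure is the one quoted above):

```python
# Per-rail headroom: one 12 V rail at 18 A vs. the quoted 180 W card.
rail_w = 18 * 12                # 216 W available on the rail
card_w = 180                    # Founders Edition power target quoted above
print(rail_w, rail_w - card_w)  # 216, 36 W of headroom
```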

It’s still worthwhile finding out where those 12VA–12VF rails go: they may be distributed in a way that makes it harder to get a free 12 V rail to each GPU… I have faith though!

EDIT: just saw Elucidesign’s post; looks like more comprehensive info than I’m providing!

OK guys, I think we are getting close :)

I cannot find a 2× SATA male → Molex female adapter (which is what I need); only the opposite is available.

What about this? I can get two separate SATA male to Molex 4-pin female adapters, get rid of one of the two Molex connectors, and merge them into one. Then I have the two Molex I need to go into the 8-pin. Like this:

I have no idea how to trace this stuff, so I’ll also go with faith. Or could I use a multimeter to see how much power runs on each of the cables?
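
(My rough understanding, which someone should correct if wrong: a meter reads volts and amps separately, not watts, so the power on a wire is just the product, and actually measuring current on a PSU lead wants a clamp meter.)

```python
# A meter reads volts and amps separately; power on a lead is just P = V * I.
# (Measuring current safely on a PSU wire needs a clamp meter or in-line shunt.)
def watts(volts, amps):
    return volts * amps

print(watts(12.1, 9.5))  # e.g. 12.1 V at 9.5 A -> ~115 W on that wire
```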

Thanks!

Sounds like we’re there…

Might be able to convert only one SATA to Molex and combine that newly converted Molex with your OEM Molex into the 8-pin?

Are the SATA cables (& the Molex) all in series (or any of them in series)? If so, I’m not sure we gain anything by combining the in-series SATAs; we’d only be adding complexity & resistance.

Does anyone have a 2× Molex to 8-pin PCIe adapter?
If I remember correctly, not all 8 pins in the PCIe plug are populated. (Further reinforces the above.)

Final thoughts: buy more adapters than you need. Then you can either get the soldering iron out and make your own to suit, or combine the wires at the Molex pins (you can usually peel them open and splice two wires back in), and you’ll have spares if it doesn’t work out the first try. I was going to suggest this earlier but wasn’t sure if you had the appetite; your diagram above suggests you do! :)

Lastly: do you have two GPUs handy that draw at least as much as the 1080 and have the same PCIe power-plug requirements? You might be able to buy only the adapters and test before forking out for the 1080s.

Okay. I’m going to try to put this to bed.

Unless you have two SATA connectors that are NOT on the same run of wire (coming out of the PSU in parallel, not in series) and can convert them to an 8-pin by any means necessary, you should stop trying to make the EF-1100-FF work. (Also known as R622G / 0R622G / CN-0R622G.)

AND.

You should remove your hard drives, or run on a single-TB SSD and mount it with Velcro elsewhere in the case. OR buy 5.25″ bay converters to house an SSD and maybe a spinning drive up front (who needs 3 DVD drives anymore?).

Measure the vertical height of your R622G and compare it to the tech specs of your 1200 W options. For example, the EVGA SuperNOVA 1200 is 85 mm high, 150 mm wide, 200 mm long. Is the face of your PSU 85 mm × 150 mm? What is the size of your R622G?
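
If it helps to keep the fit math straight, a trivial sketch (the EVGA numbers are the ones above; the bay dimensions are placeholders, measure yours):

```python
# Simple fit check, dimensions in mm as (height, width, length).
def fits(psu, bay):
    return all(p <= b for p, b in zip(psu, bay))

evga_supernova_1200 = (85, 150, 200)  # figures quoted above
r622g_bay = (86, 150, 210)            # PLACEHOLDER -- measure your chassis!

print(fits(evga_supernova_1200, r622g_bay))  # True only if it actually drops in
```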

Buy a 1200 or 1300 W MODULAR PSU; I like Corsair and EVGA. Then you will have all the connectors you need.

The “modular” connectors allow you to swap out connector cables at will. The PCIe ones often terminate in one 8-pin and one 6-pin by default, so one cable per GPU.

Here is the service manual for your machine, where I inspected your case possibilities.

Get a new modular PSU; you won’t be disappointed. But keep the packaging and return it if necessary.

I’m far less confident that all that adapting and connecting won’t just be a PITA, and it might still not give you the amps per 12 V rail you need, since the power would come from various, potentially “in series” connectors.


Totally agree:

Consolidating your hard drives to one SSD & removing all optical drives might free up a SATA too, so it’s probably best to do that first in case it frees up the power plug you want/need.

You guys totally rock! I will look at everything I need this weekend and order the necessary parts. BTW, I’m scaling back a bit on the video card: I’ll use a non-overclocked one that requires less power, just in case, and also one that can push air outside the case: https://www.asus.com/us/Graphics-Cards/TURBO-GTX1080-8G/


Here’s something out of left field to think about… You already have 24 possible threads of 3.73 GHz compute power; that’s already an insane amount of rendering capability! Before you spend a bunch of money, and possibly bottleneck your rig, why not find a renderer that can do both CPU and GPU rendering? That way you’re not locking yourself into any one type or product for rendering. You might find, and from experience I’m pretty sure you will, that CPU rendering on a machine like that will be faster than GPU alone. GPU rendering is great on consumer-grade computers, but on a workstation like yours it might be a downgrade. It would be like taking the 454 out of your muscle car to put in a turbo V6. Might be better, but probably not.

From my experience so far (and at my old job I had a dual 10-core Xeon to play with): no, 24 threads of CPU power won’t be faster than GPU rendering, not with any half-decent GPUs. In iRay, if I enable the CPU on a hex-core i7 with two Nvidia 970s, it actually slows down. In Neon, the discontinued Caustic board made a hex-core i7 faster than the dual Xeons (too bad that, for some bizarre reason, the Caustic board actually ran slower on the Xeon system; it could have been awesome).

About the only way CPU rendering can match GPU is in a comparison like iRay vs. Brazil: the GPU renderers tend to be very “physically based,” intended to give you realistic results with little fiddling with settings, while Brazil gives you a thousand settings for intervening at any step of the rendering process to get the exact result you want and cut down on render time. If you’re enough of a wizard with those options, you might be able to set things up to be faster at an acceptable level of quality, for certain kinds of scenes.

My old rig was an i7-3770: nice, fast, served me well for a very long time. KeyShot renders usually took overnight to get rid of all the noise. I added a W7100 to that and, well, with KeyShot being strictly CPU-based, there was no improvement. Trying a number of other renderers that used the GPU, I saw a huge improvement in speed over the i7. My new rig is a single 14-core/28-thread Xeon and it’s no comparison: renders are done in minutes instead of hours. There’s no substitute for cubic inches, so to speak.

I’m impatiently waiting for more renderers that use both CPU and GPU to come along so I can make use of the W7100 too. Renders in seconds, possibly?

BTW… anyone know if Cycles uses CPU and GPU for rendering?

In theory, yes; a patch that should improve stability for that will be merged soon:

https://mcneel.myjetbrains.com/youtrack/issue/RH-37680
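
For reference, in Blender’s own Cycles (Rhino’s integration exposes its own settings, so treat this purely as a sketch of the idea, assuming a recent Blender build), pointing the renderer at the GPU, and in later builds at CPU+GPU hybrid, looks roughly like:

```python
import bpy

# Sketch for Blender's own Cycles (2.8x+ API). Rhino's Cycles integration is
# configured inside Rhino, not with this script.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"
prefs.get_devices()          # populate the device list
for device in prefs.devices:
    device.use = True        # tick the GPUs (and the CPU entry, for hybrid)

bpy.context.scene.cycles.device = "GPU"
```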


Interesting perspective, and it might be true for some people’s workflows and output expectations. Not for ours.

We are happy with that way of working, and KeyShot with lots of cores works quite well for most designers. It’s a proven workflow: easy to learn, relatively fast on the viewport preview, intuitive, and you go from import to rendered image faster than in any other system. Their default scenes, settings, and materials are made for designers, not nerds. I could not say the same about any other solution inside Rhino: all painful, all unresolved, unintuitive, and all stuck in the past, in my opinion.

So CPU + KeyShot is the way to go for most of my old team. And it’ll stay that way for a while.

…I’m switching gears: besides continuing to work with them that way, I now have other projects, other clients, and tasks that require rendering for interactive decision-making. So waiting for a render is not an option, and physically accurate results are a requirement. NO CPU solution can provide that, and based on my own tests, Octane with powerful hardware can, at least for product design visualization. Also, from what I have seen others do (not my own tests yet), V-Ray RT might come really close too. I need to follow up with @crubadue on that front.

Just for a bit of perspective, take a look at the realtime behavior of these scenes by Octane user (and hardware builder) Smicha: https://www.youtube.com/watch?v=GeeZa2Y1GBs. That’s with 7× GTX 980 Ti. I want to build something with 4× 1080; it will be a bit slower, but still quite responsive. We’ll see.


UPDATE: after considering all the feedback and rethinking my needs, I decided that trying to reuse this system could become a hassle, and that 2 GPUs might not be enough anyway for a good experience. So I decided to go a different route: new system build, 4 GPUs. I’ll start a fresh new thread if you guys want to help. It’s your fault, after all, that I went that way :)

Thanks!

G


@gustojunk Did you get my letter? :)

I’d be so bold as to benchmark all the use cases you mention together with you.

V-Ray 3.4 is coming…

Yeah, you made me laugh out loud. Let’s play and see what we learn.

G