An nVidia RTX 4060, 4070, 4080, 4090, A6000, A3000, Others Chart

well sadly, good gpu deals aren’t really a thing these days.

since the chip shortage, all manufacturers have started planning new fabs, not just in asia but in the usa and even europe. i think we’ll eventually get good prices again, but it will take a few years.


Well, I bought an RTX 4070, and not a Ti. With a 9.63x improvement, I hope it will last me 3 years. Memory will be tight for large renders.

[Let’s see: my GTX 1080 8GB will do a 7500x7500 render with a light-to-medium texture load. I am hoping (hope as a plan) that with 4GB more, at 12GB, the RTX 4070 will do 8K (7680 × 4320) 16:9 renders with a medium/heavy texture load. I cannot expect anything beyond that.]
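Rough back-of-envelope on the VRAM side, just to sanity-check that hope. Every number below is a guess for illustration, not a measurement:

```python
# Very rough VRAM estimate for a single large still render.
# Every figure here is an assumption, not a measured value.

def frame_buffer_gb(width, height, passes=4, channels=4, bytes_per_channel=4):
    """Storage for the render result: `passes` full-float RGBA buffers."""
    return width * height * passes * channels * bytes_per_channel / 1024**3

result_gb   = frame_buffer_gb(7680, 4320)  # ~2.1 GB for four float RGBA passes
textures_gb = 4.0                          # guessed "medium/heavy" texture load
scene_gb    = 2.0                          # guessed geometry + acceleration structures
overhead_gb = 1.0                          # guessed driver / OS / display overhead

total_gb = result_gb + textures_gb + scene_gb + overhead_gb
print(f"estimated: {total_gb:.1f} GB needed vs 12 GB on the RTX 4070")
```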

I…(scuffs feet) suppose it could also be used for video games.


Resizable BAR / SAM are new to me. I wonder if Cycles/Rhino Render performance could be helped if it were supported.

I could be wrong here, but Resizable BAR is managed on the motherboard/driver side. By default, Cycles should benefit from it anyway(?). Regardless, does it even matter for single frames? Is it even of the same order of magnitude in time, for a single frame, as something like shader compilation?
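If anyone wants to check whether Resizable BAR is actually active on their system, I believe nvidia-smi reports the BAR1 aperture size; on a ReBAR-enabled setup the BAR1 total should be roughly the whole VRAM rather than the classic 256MB window. A quick sketch:

```python
import subprocess

# Print the memory section of nvidia-smi; look for the "BAR1 Memory Usage"
# block. A total close to the card's full VRAM suggests Resizable BAR is on.
out = subprocess.run(
    ["nvidia-smi", "-q", "-d", "MEMORY"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if "BAR1" in line or "Total" in line:
        print(line.strip())
```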

Please correct the chart. The 3080 Ti CUDA core count is incorrect! And of course the prices: the 3080 Ti cheaper than a 3080? A 3080 can be found for 400 euros and a 3090 Ti for 800 euros!

I corrected the core count for the 3080ti, thanks for noticing that.

As for prices, these are not representative. These are just prices I could find here locally. I checked some of the prices again and updated them for the 3080 and 3090ti.

But yeah, this list is not meant as some sort of reference for others. I showed it here as an example of how you can make your own list and what you might want to look out for.


The RTX 4070 came in. For gaming it works really well, but for rendering, the speed jump from my old card should have been 9x, and I am seeing about 6x. It’s daunting that even throwing another $1,000 at a 4090 would only give me a 2x increase.
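For clarity, this is the comparison I am making; the times below are placeholders rather than my actual numbers:

```python
# Placeholder render times, only to show how the cards are being compared.
old_card_seconds = 900.0   # hypothetical GTX 1080 time for one frame
new_card_seconds = 150.0   # hypothetical RTX 4070 time for the same frame

observed = old_card_seconds / new_card_seconds   # 6.0x
expected = 9.63                                  # ratio taken from the chart

print(f"observed {observed:.1f}x vs expected {expected:.2f}x "
      f"= {100 * observed / expected:.0f}% of expected")
```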

BTW, this is very interesting: activating OptiX appears to lower the maximum power the card will use. With OptiX: usually less than 100 watts. Without OptiX: usually up to 143 watts. Call me cynical, but it appears that nVidia hobbled OptiX on non-Quadro cards. As in: I am angry.
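For anyone who wants to watch the same numbers, a small nvidia-smi loop like this logs the power draw once a second while a render runs; just a sketch.

```python
import subprocess, time

# Log board power draw, GPU utilization, and SM clock once a second.
FIELDS = "timestamp,power.draw,utilization.gpu,clocks.sm"

while True:
    row = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(row)
    time.sleep(1)
```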

Although there are faster processors, the render does not seem CPU-bound. This motherboard has 4x PCIe on its slots.

Also, noting: I needed to set the card to Maximum Performance in the driver for it to go over 100 watts.

Some good news: the card will do an 8,192 x 8,192 render with, at the lowest point, 2.6GB remaining, using a low-ish texture load. I suspect it should be able to handle standard 16:9 8K renders (7680 × 4320) with a fairly high texture load, so I am relieved by that. Oddly, the memory used changes by quite a bit as it renders, varying from 4.5GB to 2.5GB, for some reason.
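The memory figures come from watching VRAM during the render; the same kind of loop works for that, too (sketch only):

```python
import subprocess, time

# Log VRAM in use and free once a second, to see the 2.5-4.5 GB swing.
while True:
    print(subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.free",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip())
    time.sleep(1)
```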

For Windows, here’s a link to OpenHardwareMonitor. These are the items I selected for the widget; the selection has to be done by the user.
https://openhardwaremonitor.org/

I added a custom setting for Rhino, selecting “Maximum Performance.” For Rhino, I usually use application-controlled for most settings, as I don’t usually need anisotropic filtering, and antialiasing is usually set to low in Rhino, as I have a 4K monitor.

I don’t think nvidia has restricted the rt cores on the rtx cards compared to the A-series (professional) cards. optix renders drawing less power than cuda renders is expected. way more of the transistor area on the gpu die goes to cuda cores, since those are used for classic rasterization work; rt cores make up only a small portion of the die. since they are highly optimized for ray calculations, you need far fewer of them and still end up rendering faster than using all the cuda cores. compared to rt cores, cuda cores are more like the brute-force way of getting a rendering done. very few rt cores rendering faster than a lot of cuda cores while drawing less power is normal. nothing points to anything hobbled here.

I’d love to see nvidia making an accelerator with just rt cores - but the market (people doing renderings) just isn’t big enough to justify a product like that.

yes, when using the gpu and especially optix, the rendering shouldn’t care all that much about how fast your cpu is.

what do you mean by “4x PCIe on its slots”? you should make sure the card is in the upper slot, then all 16 lanes directly to the cpu are utilized. if you add another card in a specific lower slot, both share the 16 lanes and each card drops to x8. the rest of the slots are usually routed through the chipset and should not be used for important stuff like a gpu. though even if you restrict the bandwidth to just 4 lanes, this should not have a big effect on rendering. however, gaming would suffer a lot with a 4080 connected via just 4 lanes.
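if you want to see what link the card actually negotiated, nvidia-smi can report it directly. a quick sketch (field names per nvidia-smi --help-query-gpu):

```python
import subprocess

# Report the PCIe generation and lane width currently in use, plus the maximum
# supported. Note: the current gen often drops at idle for power saving, so
# check while the gpu is under load.
fields = ("pcie.link.gen.current,pcie.link.width.current,"
          "pcie.link.gen.max,pcie.link.width.max")
print(subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv"],
    capture_output=True, text=True, check=True,
).stdout)
```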

In the past, nVidia has hobbled the compute on GeForce cards. I have no reason to trust them any longer. When doing a 4K image, I will run times for both OptiX and CUDA.

Also, I am still using the non-beta version of Rhino. The new one is supposed to be faster. I am looking forward to the end of disk-caching materials. I am hoping that realtime rendering starts faster.

PCI Express has different versions/classes. Sorry, it should have been 4.0 instead of 4x.

Which card slot is used may vary from board to board. Mine is in the correct slot. Though, it’s an “SLI”-capable AMD board, so while the lanes would be divided, I could add another card someday, which is why I am pleased that an RTX 4070 at 12GB seems to have enough memory.

I noticed that rendering pauses about once a minute, and it hits the system’s “multi-tasking” really hard. Even a video will stutter.

As a sanity check, have you tried one of the heavier Blender benchmarks? It would let you confirm whether the hardware is hitting bottlenecks compared to how it should be expected to perform.
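If Blender is installed, one way to time a heavier scene headless and compare OptiX against CUDA directly would be something like the sketch below; the paths are hypothetical and --cycles-device is the standard Cycles command-line option:

```python
import subprocess, time

BLENDER = r"C:\Program Files\Blender Foundation\Blender 3.6\blender.exe"  # adjust
SCENE = r"C:\benchmarks\classroom.blend"  # any heavy demo scene you have locally

def time_render(device):
    """Render frame 1 headless with Cycles on the given device and time it."""
    start = time.time()
    subprocess.run(
        [BLENDER, "-b", SCENE, "-E", "CYCLES", "-f", "1",
         "--", "--cycles-device", device],
        check=True, capture_output=True,
    )
    return time.time() - start

for device in ("OPTIX", "CUDA"):
    print(device, f"{time_render(device):.1f} s")
```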


That’s a good idea. I will check as soon as the other render is done.

you want to run every rendering twice, once with optix, once with cuda? why? I don’t get it.

non-beta? do you mean you are using rhino 7? if so, you need to know that the normal render method with the RenderWindow is bugged. use ViewCaptureToFile instead, it is about twice as fast. I just recently compared the two ways again to see if that bug has been fixed but it isn’t. IMO users should be warned about this in a dedicated post pinned on top of the rendering category here in the forum. I’m pretty sure many people use render without knowing that this takes double the time compared to ViewCaptureToFile.
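for anyone who prefers scripting it, here is a minimal rhino python sketch using the RhinoCommon ViewCapture class (path and size are just examples, and I’m assuming this takes the same capture path as ViewCaptureToFile):

```python
import clr
clr.AddReference("System.Drawing")
from System.Drawing.Imaging import ImageFormat

import Rhino
import scriptcontext as sc

# Capture the active viewport (e.g. Raytraced) straight to a file,
# bypassing the Render window.
view = sc.doc.Views.ActiveView
capture = Rhino.Display.ViewCapture()
capture.Width = 7680
capture.Height = 4320

bitmap = capture.CaptureToBitmap(view)
if bitmap:
    bitmap.Save(r"C:\renders\capture.png", ImageFormat.Png)  # example path
```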

I recently installed rh8wip and did a comparison - luckily it is fixed there, normal render and ViewCaptureToFile were pretty much equal. I only tested optix and it is also overall noticeably faster compared to rh7. that’s why I use rh7 for designing and rh8wip for rendering.

on almost all consumer boards, the upper slot is the one wired fully x16 to the cpu and intended for the gpu. that’s because signal integrity degrades with distance from the cpu socket. for pcie 4.0 and especially 5.0 you usually need to add expensive redriver and retimer chips for signal integrity (if you wanted to use a slot further down).

regarding your motherboard, if you haven’t already, make sure to upgrade the uefi to the latest version. that series had quite a few issues which were resolved some time ago. also, just recently, a new vulnerability on intel and a similar one on amd was made public. this was addressed silently by microsoft at the os level earlier, but fixes for the motherboard uefi (which update the microcode in the cpu) have recently been released as well. asus has already released the new version for my board; msi should follow soon.

that’s strange. not sure though if this is related to multi-tasking issues.

Timing the renders both ways would let me compare them.

Generally, I refer to the motherboard manual for things like slots and memory. For instance: there is no upper slot in my computer, because it’s in a rackmount case : P

[The chipset could be moved and situated with slots branching out in two directions. That way, the slots on either side of the chipset would run just as fast. In other words: you would have an equal number of slots on either side of the chipset.]

I’ve had issues with the latest BIOS/UEFI firmware for my MB. Right now, I am running what came on it, but I will try climbing through them at some point. You’re welcome to make suggestions, but sending an MB back is not my idea of a good time, especially to Asus, which is why I will never buy another of their motherboards. This MB/RAM/CPU combo is middle-aged: MSI MEG X570 ACE, 3900X, 64GB, and it’s probably out of warranty now.

I am going to do a clean install, but because I changed 2 pieces of hardware (NVMe / video card), I wanted to do them separately, so that there isn’t an Activation hardware-binding issue.

[Microsoft Windows has never been a good multi-tasking operating system. When I was young, I used Microware OS-9 (an RTOS) on my trusty CoCo. As long as you had a good disk controller, it was impossible to make it stutter, no matter what you did. There were no hangs, and application switching happened at 60Hz intervals, no matter what.]

atx boards have a north orientation. that does not change whatever enclosure you use the board in. so yes, there always is an “upper” or northmost slot.

I have no idea what you are talking about.

what do you mean by “climbing through them”? i never suggested you should send back your board. where would you get that idea from?

this is recommended after installing this much new hardware. a fresh install is also a good opportunity to update the uefi again. all the uefis for amd 500 series mainboards have worked without issues for quite some time now. be sure to also use the latest chipset drivers.

I’m sorry, but you are not going to sell me on not consulting the motherboard manual.

Two PCIe slots could have equal speed if their distance from the CPU is the same, all other things being equal.

Any time you flash a motherboard CMOS/UEFI, there is some risk. Even on a motherboard with a backup firmware, there is a risk of bricking it.

[My old MB had two sets of UEFI/CMOS. One had been corrupted by a power fluctuation that got through a Belkin surge suppressor and a Corsair HX 1050 power supply. The backup firmware worked. After the issue was fixed, I sold the 8-year-old MB, Sandy Bridge CPU, and memory for about $100 after expenses. It’s probably still going, wherever it is.]

As I stated, after the latest firmware was found to be buggy, I downgraded to the original that I was running. There are others, which may be good to go through. I will likely pick one from just after Resizable BAR (base address register) support was introduced.

As for my machine, because I also did an NVMe swap, I wanted to change the hardware one piece at a time, to avoid any Windows Activation issue. The machine is running fine, well, as fine as Windows 11 can. Windows 11 is not as reliable as Windows 10, IMO, nor is it as good, nor does it have any feature over Windows 10 that I care about.

Anecdotally I can say that I just bought an NVidia RTX 4070, and now I can manipulate detailed RhinoCAM simulations without any issues. No freezing or lagging. I previously had a Quadro P2200, which really struggled.

I’d be curious to see how hard you have to push GPUs in Rhino before they become a limitation. I’m sure someone out there can max out these cards, but my initial impression is that the 4070 is more GPU than I actually need for Rhino/RhinoCAM.


Hello everyone! I am new here and wanted to know your opinions. I have two options on the used market right now: an RTX 3090 for 511 EUR and an RTX 3080 10GB for 350 EUR. For 3D rendering (short animations), would the VRAM difference justify the price difference? I am only getting into 3D animating, but since these options have appeared now, I need to decide now. Also, there is an option to buy both and sell one for a higher price :slight_smile:
P.S. My sibling is traveling to my home country, and prices are cheaper there…

I would buy as much VRAM as you can get. But I don’t know how much you will use, so I can’t tell you if the cheaper one is enough.
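Just to frame it with your own prices, a quick euros-per-GB comparison (rough arithmetic, nothing more):

```python
# Price per GB of VRAM for the two used cards mentioned above.
cards = {
    "RTX 3090 (24 GB)": (511, 24),
    "RTX 3080 (10 GB)": (350, 10),
}
for name, (price_eur, vram_gb) in cards.items():
    print(f"{name}: {price_eur / vram_gb:.0f} EUR per GB")
```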

Are these cards coming from bitcoin miners? If so, they may not be as good of a value as they seem. Just thinking out loud…

Hey, thank you for your reply!

There is no information about mining, and they would probably lie even if the cards had been mined on. But one thing: the 3090 is listed as “perfect” condition, and from the photos it does look like it, with all of the seals on the card untouched, while the 3080 is “good” condition and from the photos it is somewhat dusty, with broken seals.

About usage: I really hate being limited or restricted by hardware in any form. I think that is one of the most important things about a PC build for me. So even though I am not too sure about the complexity of my projects, I do not want VRAM to constrain my workflow…

Also, to make it clear, I am not really in need of an upgrade; it’s just that I got the opportunity… Given that, I was thinking maybe to wait for the prices of the 40 series to drop… I am not too sure. What’s your take on it?

I doubt the prices will drop, and even if they do, it will happen only with the release of the 50 series, which is nowhere near (late 2024-2025).