An Nvidia RTX 4060, 4070, 4080, 4090, A6000, A3000, and Others Chart

Further up I had written about VRAM before. In short, you may never need that much VRAM and might be fine with just 8 GB, depending on the type of models you work with. I have the old 3080 with 10 GB, and currently I render a lot. It's simple product design renderings, and I never come close to using that 10 GB. I bought it on release date, and back then the 3080 was the right choice over the 3090 in terms of value.

What do your typical render scenes look like? If it's architecture or interior design with lots of high-res textures, then more VRAM would indeed make more sense. For just product renderings, the extra money for the 24 GB 3090 might be better invested elsewhere.

I recently upgraded to a 4070 with 12 GB of VRAM. I need it for doing CAM simulations, and I've used pretty much all of it already. I previously had 5 GB of VRAM, which was causing me problems: I couldn't view the simulations due to a lagging display. It was really bad!

With more VRAM, the simulations spin like a top when I'm inspecting them. Night and day difference.
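For anyone wondering how close their own renders or simulations get to the limit, `nvidia-smi` can report per-GPU memory use. A minimal sketch (it assumes `nvidia-smi` is on your PATH; the query flags are standard, but the helper names are mine):

```python
import subprocess

QUERY = ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"]

def parse_vram(csv_line: str) -> tuple[int, int]:
    """Parse one 'used, total' CSV line from nvidia-smi (values in MiB)."""
    used, total = (int(field.strip()) for field in csv_line.split(","))
    return used, total

def vram_usage() -> list[tuple[int, int]]:
    """Return (used_MiB, total_MiB) for each GPU in the system."""
    out = subprocess.run(QUERY, capture_output=True, text=True, check=True).stdout
    return [parse_vram(line) for line in out.strip().splitlines()]
```

Run it in a loop while a render or simulation is going and you'll see how close you actually get to the card's limit.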


Thread's too long and I can't read it all… Quadro RTX 5000 vs A4000: if they're the same price, which one is better for standard Rhino modeling, Cycles and V-Ray rendering, CFD studies, and CAM simulations?
I will also be using it for ZBrush, SolidWorks, Onshape, Geomagic, scanning and meshing software, etc.

Which RTX 5000? The old Turing one, or the new Ada Lovelace?

The new RTX 5000 Ada is way faster than the A4000.
The old RTX 5000 is inferior to the A4000.

I also have a 4070. My work hits Rhino pretty hard. In V7, large texture handling is a thing. I believe and hope that it's being addressed in V8.

It's been about a month since I last checked used prices, and I have not seen great deals in performance per dollar (or quatloo), but I have seen good deals on cards with more memory, which might be a figure of merit for large jobs.

Though, the extra memory is only useful if you actually need it, such as for a high texture load.
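For a rough sense of what a "high texture load" means in VRAM terms: an uncompressed texture costs width × height × channels × bytes-per-channel, plus about a third extra for a full mip chain. A back-of-envelope sketch (the mip factor and the defaults are my assumptions; compressed formats like BCn cost far less):

```python
def texture_vram_bytes(width: int, height: int,
                       channels: int = 4, bytes_per_channel: int = 1,
                       mipmaps: bool = True) -> int:
    """Rough VRAM footprint of one uncompressed texture, in bytes."""
    base = width * height * channels * bytes_per_channel
    # A full mip chain adds roughly one third on top of the base level.
    return base * 4 // 3 if mipmaps else base

# One 4K RGBA8 texture, no mips: 4096 * 4096 * 4 = 67108864 bytes (64 MiB),
# so a scene with around 150 such textures already fills a 10 GB card.
```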

[Most current video games use an astounding amount of video memory.]

If anything, the value proposition for some cards may get worse following the ban on some of Nvidia's card classes.

Which ban? You mean the export ban to China? Wouldn't that make more stock available for the rest of the world? I thought that ban only affects some of the latest high-end "super chips"?

Unlikely. The usual route for Nvidia is to throw their toys out of the pram and proportionally reduce production. They did this earlier this year when both the 4080 and the 4060 Ti, which are dreadful value propositions, weren't selling.

This is a business which now exists solely to trap people into their almost monolithic semi-monopoly. GeForce is also less interesting to them, since they have such high demand for Hopper compute GPUs, so we can all make images of cats on skateboards or generate terrible game dialogue on the fly.

It may not affect the lower series, but their upper series may get more expensive. Unless they fill in the gap between the 4090 and 4080 with the rumored 4080 Super, which is also likely to be awful value.


Yep, a sad truth for the consumer, though I don't really hate Nvidia for what they are doing. They simply have the best product on the market, and that's why they can set the price. No matter how much they have raised prices in recent years, consumers keep buying.

I saw AMD on a good path with RDNA 2, aka the 6000 series, but then they botched RDNA 3. I bet that when Nvidia transitions to a multi-chip architecture, they will nail it on the first try.

Let's just hope AMD does not shut down their GPU division entirely. I think they just announced some big layoffs in their GPU department.

It would help if software devs started to support AMD cards. For rendering, the only real choice is Nvidia.


Let's hope they don't lay off people in their driver department… The hardware is great and has huge potential.


Yes, exactly! The raw compute power on silicon has always been there, on par with, if not better than, Nvidia's.

But until recently, AMD just didn't invest in the software ecosystem. Even now, I think it isn't enough. To close the gap to Nvidia, it's not enough to just invest a bit more than before. Recently, AMD GPUs gained the ability to somehow run CUDA-like code, and you can now use them for rendering in Blender Cycles. But who cares? People have been using OptiX for that for a long time now.

There is a bit of a mutual problem here. A couple of years ago, AMD somewhat chose to merely match Nvidia cost-for-cost, except at the very top of the line (where there are few customers). In reality, Nvidia seems to have the best API story, and that's important for developers.

For large game studios, this is not so bad, as they can afford to have developers support all three APIs on offer (CUDA/OptiX, HIP, and oneAPI/Embree, for Nvidia, AMD, and Intel, respectively), so all three get adopted. But smaller studios cannot do this, and the choice is reduced to two: AMD and Nvidia. Of the two, Nvidia will currently always win.

In production software, the problem is merely compounded. Outside of Autodesk, Dassault, and Siemens, it's probably very hard as a developer of CAD/CAM software to invest in even more developers to spread your development across three APIs.

Even Autodesk Arnold, which is probably the most obvious example among the huge companies, only supports Nvidia GPUs for rendering compute, as far as I can see. The others don't really render as such (or it's just OpenGL-type viewport display).

Even Blender has had a hard time supporting much more than Nvidia until recently, and it was only around Cycles in Blender 3.3+ that they really supported all three GPU vendors' APIs, so you can use any card you like, I think. Throw in Apple, and the problem compounds further.
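As a concrete example of what that 3.3+ support looks like: recent Blender builds let you pick the Cycles backend per vendor from Python. A sketch only, since the exact enum values and preference names have shifted between Blender versions:

```python
import bpy  # only available inside Blender's bundled Python

prefs = bpy.context.preferences.addons["cycles"].preferences
# One of: 'CUDA'/'OPTIX' (Nvidia), 'HIP' (AMD), 'ONEAPI' (Intel), 'METAL' (Apple)
prefs.compute_device_type = "OPTIX"
prefs.get_devices()              # refresh the detected device list
for device in prefs.devices:
    device.use = True            # enable every detected device
bpy.context.scene.cycles.device = "GPU"
```

The point is that one renderer now has to carry four vendor-specific backends just so "any card you like" works, which is exactly the development cost smaller studios can't absorb.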

This then becomes a self-reinforcing feedback loop: the dominant force is Nvidia, so production software with rendering capability targets Nvidia and, if you are lucky, AMD at a push. So people who want fast rendering (at the expense of CPU "correctness") have no choice but to buy Nvidia GPUs, which merely restarts the cycle. Developers of many production packages simply cannot afford to hire more people to develop outside of CUDA, or whatever Apple uses in any given year.

Neither AMD nor Intel can yet match Nvidia's GPU capability or sheer efficiency with their GPU architectures, and I'm not entirely sure AMD really cares about doing so.

Regardless, I hope someone can do something, either about the API lock-in (maybe the lock-in is better for developers), or about a natural diversification of the GPU market (maybe bad for developers, but good for consumers).

I took a risk when I built my system and got an Intel GPU. It's very good for a first generation. But I won't lie to myself and say it's anywhere near anything beyond a 4060 (Ti), with a market share of less than a few percent and a newish API. It will render in Blender and Chaos Vantage, but that's it. Utterly useless for probably 95%+ of production render users. Of course, I am tempted to suck it up, get an Nvidia GPU, and become part of the problem. But AMD and Intel both need to do much better, and probably work with developers, to break this semi-monopoly.


AMD has been playing catch-up with Nvidia for a long time now. It seems that whenever AMD finally catches up and develops a response to a new Nvidia technology, Nvidia releases some new killer feature and AMD is left in the dust all over again.


Thank you for this info! I ended up getting the A4000, and I had a chance to try and benchmark the old RTX 5000 as well. The RTX 5000 wasn't loads slower, but the difference was enough to justify a few hundred dollars more, saving me a minute per render.
The A4000 also uses less of my PSU's power budget, which is always nice.
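The PSU point is easy to quantify: the Quadro RTX 5000 (Turing) is rated around 230 W of board power versus roughly 140 W for the A4000 (check your card's spec sheet). A toy headroom calculation, with an assumed 250 W for the rest of the system:

```python
def psu_headroom(psu_watts: int, gpu_board_power: int,
                 rest_of_system: int = 250) -> int:
    """Wattage left over after the GPU and an assumed 250 W
    for CPU, drives, and fans (adjust for your own build)."""
    return psu_watts - gpu_board_power - rest_of_system

# On a 650 W PSU:
# psu_headroom(650, 140) -> 260  (A4000, ~140 W board power)
# psu_headroom(650, 230) -> 170  (Quadro RTX 5000, ~230 W board power)
```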


Congratulations on your purchase; the A4000 is a solid option for Rhino!


Back to this topic. It seems that most stock of 4090s in the US has vanished, for various theorized reasons.

They can be found, but I think the base price is now $2,000 for a 4090; a Strix is now $2,800. We had a slight delay in Europe, presumably due to the lag in purchasing agreements, but I've noticed that prices in the EU/UK have started rocketing too.

The baseline here is now somewhere around 1,750–1,800 euros.

The other higher-end cards in the range have been static in pricing, and the fake Black Friday deals on 4080s and 4070 Tis are basically nonsense.

But apparently, it may be worth waiting to see what the alleged 4070 Super, 4070 Ti Super, and 4080 Super bring in January, rather than buying now.


There is only one way Leatherjacket and his shareholders set pricing, so it can only be another disappointment in January. Can't fault them either, with no competition to speak of.

The reason many 40-series cards aren't in stock anymore is that inventory is being cleared before the new Super refresh arrives early next year.