An nVidia RTX 4060, 4070, 4080, 4090, A6000, A3000, Others Chart

Yes, it is out.

The 5000 series is coming around 2025, or sooner.

“RTX 5090 is predicted to make its grand debut around the fourth quarter of 2024”

Everyone seems to want the most VRAM possible, though the amount of VRAM you need depends on the scene/file size you work with and render. I bought a 3080 on release; it “only” has 10GB, and back then I was very aware of the possible limitation. I was also aware that the products I work on don’t result in huge files, and when rendering I don’t use any textures. So for my type of work, 10GB is more than enough.

All I wanna say is that depending on your work, you might not need to get a 16GB+ card, and a 12GB or sometimes even an 8GB card might be just fine.
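To put rough numbers on that (purely illustrative, my own assumptions, not from any particular renderer), here is why a textureless product scene fits comfortably in 10GB while a texture-heavy one may not:

```python
# Back-of-the-envelope VRAM estimate (illustrative only; real renderers
# compress, mip-map and stream data, so actual usage will differ).
def texture_bytes(width, height, channels=4, bytes_per_channel=1):
    return width * height * channels * bytes_per_channel

def mesh_bytes(vertex_count, bytes_per_vertex=32):  # position + normal + UV, roughly
    return vertex_count * bytes_per_vertex

# A 2-million-vertex product model with no textures:
print(mesh_bytes(2000000) / 1024**2)             # ~61 MiB
# The same model with twenty uncompressed 4K textures:
print(20 * texture_bytes(4096, 4096) / 1024**2)  # 1280 MiB
```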

Question:
Does anyone know if more lights in a render scene require more VRAM for rendering/raytracing?

1 Like

For 1920x1080 and few textures, yes, an 8GB card might be fine.

I have run my 8GB GTX 1080 out of memory, even on some 4K renders.

Perhaps @nathanletwory might chime in as to whether or not lights take more VRAM.

[I think that in earlier versions, Cycles preferred emissive light sources because that’s sort of how it handles its bounces internally. My vague understanding is that it’s like it preloads what would be a bounce on a surface, like an initial condition.]

1 Like

I am trying so hard not to buy a 4090, and you aren’t making it any easier. : )

1 Like

Render lights don’t necessarily take up much memory; mesh lights can if you have large meshes in use.

More lights typically mean longer rendering times. With the update to Cycles 3.5 in the Rhino 8 WIP there is now an internal light tree concept, but I don’t know enough to say anything useful about it just yet.
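My rough understanding so far is that it is a form of importance sampling over many lights: instead of picking a light uniformly, the renderer picks one with probability roughly proportional to its estimated contribution at the shading point, using a hierarchy over the lights to keep that cheap. A minimal Python sketch of just the flat sampling idea (definitely not Cycles’ actual implementation or data structures) might look like this:

```python
import random

# Toy many-light sampling: choose a light with probability proportional to a
# crude contribution estimate (intensity / squared distance). A real light
# tree does this hierarchically so it scales to thousands of lights.
def pick_light(lights, shading_point):
    weights = []
    for light in lights:
        d2 = sum((a - b) ** 2 for a, b in zip(light["position"], shading_point))
        weights.append(light["intensity"] / max(d2, 1e-6))
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for light, w in zip(lights, weights):
        acc += w
        if r <= acc:
            return light, w / total  # chosen light and its sampling probability
    return lights[-1], weights[-1] / total

lights = [
    {"position": (0.0, 0.0, 5.0), "intensity": 100.0},
    {"position": (10.0, 0.0, 5.0), "intensity": 10.0},
]
print(pick_light(lights, (0.0, 0.0, 0.0)))
```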

3 Likes

Thank you for chiming in, Nathan. Sorry to put you on the spot like that.

I’ll see if I can make a render test plugin while I get around to completing Holomark 3.

@nathanletwory are there any ways to set a fixed render output size for both Windows and Mac, and also to set the render quality? Or are these stored in the Rhino file so I can just call Render?
I’ll code this in Python, so using RhinoCommon is fine. It would be nice if I could get some info on when the render is completed so I can output the time used. If not, then users will just have to read that out from the render panel.

I presume rendering is easier than activating raytraced mode at a fixed viewport size.
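Something like this is roughly what I have in mind, assuming RhinoCommon’s RenderSettings.ImageSize is honoured by the current renderer and that the scripted _-Render command blocks until it finishes (I haven’t verified either in Rhino 8, so treat it as a starting point only):

```python
import time
import System.Drawing
import scriptcontext as sc
import rhinoscriptsyntax as rs

# Sketch: force a fixed output size, render the active view, report wall time.
settings = sc.doc.RenderSettings
settings.ImageSize = System.Drawing.Size(1920, 1080)  # fixed resolution for the test
sc.doc.RenderSettings = settings

start = time.time()
rs.Command("_-Render", echo=False)  # run whatever renderer is currently assigned
print("Render took {:.1f} seconds".format(time.time() - start))
```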

3 Likes

Hi Holo,

You are going to put a few 2K and 4K textures in there?
(ducks)

BTW, thanks for working on Holomark. : )

Oddly, it would seem that if there were a cut-down version of Rhino that could run Holomark and be freely distributed, it might be good PR for Rhino.

1 Like

AI demand for GPUs has created GPU shortage: Why GPUs are New Gold - YouTube

Sigh.

Absolutely agree. It depends on your use case. I find that you usually need a lot of VRAM for anything real-time.

If I am running D5 Render, for example, with just a simple example scene, I am using 7 of the 8GB of VRAM on my RTX 3070. Windows 11 alone needs around 2GB for my 2 x 4K screens.

Opening Unreal Engine 5.2 with a normal-sized project, I get the following message:

That means the 8GB is not enough, and it has to push some data into shared GPU memory.

For GPU-based rendering using V-Ray I have never noticed any memory warnings, but I think it automatically uses the shared GPU memory when rendering.

Seeing how a lot of rendering is pushing towards GPU-based and real-time rendering, I expect this trend of higher VRAM requirements to continue. For a new graphics card that is going to last 2-3 years, I would not recommend below 16GB if you are interested in or going to use any real-time rendering capabilities.

If your workflow mainly consists of non-real-time rendering and you have 8GB of VRAM, I think you will be fine waiting for prices to come down or new models to come out.

As for lights taking more VRAM, it is usually not the light itself but the shadow map that takes up VRAM. This is especially true for point lights, which use cube maps (so six textures) for shadows. Again, this is only really an issue when using real-time rendering and will very much depend on the software.
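As a rough, made-up example of how quickly that adds up (the exact numbers depend entirely on the engine and its settings), a point light’s cube map holds six depth textures:

```python
# Illustrative shadow-map VRAM math; the resolution and texel size here are
# assumptions, and real engines vary (compression, cascades, atlases, etc.).
def shadow_map_bytes(resolution, faces=6, bytes_per_texel=4):
    return resolution * resolution * faces * bytes_per_texel

print(shadow_map_bytes(2048) / 1024**2)       # one 2048px point-light cube map: 96 MiB
print(10 * shadow_map_bytes(2048) / 1024**3)  # ten such lights: ~0.94 GiB
```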

2 Likes

I am still hopeful that they might come out with a 16GB 4070 or 4070 Ti using binned 4080/4090 parts. The problem is: they may have underwhelming 16GB 4060/4060 Ti sales and not notice that, for right now, a 4070/4070 Ti would fall into the old “enthusiast” category.

I think that the 4090 Ti is shelved for now.

From what you stated, that it has shadow maps, is V-Ray not a path tracer?

I’m not surprised the 4090 Ti got shelved. At least in the UK, we can still buy new 4090 Founders Editions, which is remarkable. Though the prices are still reasonably buoyant for the flagship, as can be expected.

With the insane and bizarre policy of “the more you buy, the more you save”, I have a hard time seeing new variations of the mid- to high-end 40 series being made, given the range is built entirely around encouraging upselling.

I’m still reasonably sure I’d not consider swapping my Arc A770 for anything less than a 4090. So in effect, nvidia has succeeded in upselling it to me, because everything else is between “meh” and woeful value.

No, I meant that V-Ray does not use shadow maps in the way that proper real-time renderers like game engines do, which will often hold a lot of shadow maps in memory. V-Ray does do path tracing, of course, but it has different rendering modes (CPU, GPU or RTX), which also makes a difference as to what, if anything, gets loaded into GPU memory. V-Ray also has different ways of caching lighting solutions, but in essence they are very different from a shadow map, which stores the shadows of one light in a texture.

1 Like

Each day, I grow weaker. For today, I am staring down the icy blue barrel of a 3-3/4 day render…

Living on a fixed income, it’s a bit insane to even consider the 4070. The idea that it could be done 9 times faster, roughly ten hours instead of nearly four days, is astounding. I just want something that can render 8K by 8K, and that should hold me for 3 years. I had been hoping that someone at nVidia saw the gaping hole in their product line, as the 4060 could be had with more memory than the 4070, as in WTF?

I like the 4080s higher memory, but $1200 is a lot of money for me, right now.

Money aside, I like that the 4080 has a lower power draw than the 4090, which means less noise, as I am in a hot 10x12 room in the heart of Silicon Valley. Let’s see: in the winter, 750 watts heats this room well. The 4080 takes 320 watts, the 4090 takes 450. In the winter, even the 3900X and GTX 1080 add warmth.

I would probably go for the 16GB RTX 4060 Ti. It has a lot of memory, meaning it will be more likely to last you quite a long time, while having pretty decent performance. You can get it for around 500 Euro.

Chicken and egg problem I know, but it is unfortunate that so few renderers offer any hardware accelerator support beyond CUDA/OptiX. Maybe it will change eventually.

Cycles seems to be furthest ahead in terms of cross-compatibility, now supporting all three vendors’ APIs with full ray-traced acceleration.

Perhaps there is additional hope for those who render in the cloud, with Bergamo and Sapphire Rapids CPUs on the way; while they won’t dent GPU speeds too much for specific activities, perhaps they will bring the cost of cloud rendering down?

1 Like

Maybe have a look at the used market. I’m not sure, but especially in the USA there should be plenty of last-gen cards offered for a good price. Consider a 3070 or a 3080 (12GB), like mentioned in this video:

and again, some numbers from Blender:

The difficulty is discerning which have been hammered in a 24/7 mining rig…

I have included a few in the comparison chart. It is a good idea, but the cost per score was not as low as I had hoped.