V-Ray 3.6.03 smooth mesh setting causes out-of-memory issue


(ryan6565) #1

I just started using V-Ray for Rhino (coming from V-Ray for Max and KeyShot) and I’ve run into an issue with large assemblies imported from SolidWorks via IGS. I have no problem rendering GPU+CPU with the mesh setting on Low; however, when I change the basic setting to Smooth, the render quickly stops with an out-of-memory error.

I have an Nvidia GTX 650 with 1 GB of memory and was wondering if this is the reason for my problem. Again, I’ve used V-Ray for Max and never had issues with much larger CAD files, same with KeyShot on the same workstation.

I was thinking about upgrading to a 1070 or 1080 if that would solve the problem.
I used an older version of V-Ray for Rhino a few years back and don’t recall having this issue.

My system also has an i7-4770 with 32 GB of RAM.


#2

Well, you probably need to play around with the custom mesh settings to get a better balance of quality vs. memory usage. Long story short, a large imported assembly may well produce a catastrophically dense mesh with the normal “Smooth and slower” setting. The first thing would probably be to turn ON “Jagged seams.” Then set the “Density” slider to a low value, and the most important setting would be the “Maximum distance, edge to surface” tolerance… that’s a start.
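If you prefer to experiment with those values by script rather than through the dialog, here’s a minimal sketch using RhinoCommon’s `MeshingParameters`; the mapping of the dialog labels to the property names below is my assumption, so double-check it against your build:

```python
# Rough sketch (Python inside Rhino): custom mesh settings scripted via RhinoCommon.
# The mapping of dialog labels to MeshingParameters properties is an assumption.
import Rhino

mp = Rhino.Geometry.MeshingParameters()
mp.JaggedSeams = True        # "Jagged seams" ON: meshes along joined edges need not match
mp.RelativeTolerance = 0.1   # roughly the "Density" slider, kept low
mp.Tolerance = 0.01          # "Maximum distance, edge to surface" (in model units)

# Re-mesh one selected surface/polysurface with these settings to compare face counts
rc, objref = Rhino.Input.RhinoGet.GetOneObject(
    "Select a surface or polysurface", False, Rhino.DocObjects.ObjectType.Brep)
if rc == Rhino.Commands.Result.Success:
    meshes = Rhino.Geometry.Mesh.CreateFromBrep(objref.Brep(), mp)
    print("{} mesh faces with these settings".format(sum(m.Faces.Count for m in meshes)))
```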


(ryan6565) #3

Thanks Jim,

I understand the mesh settings and change them quite often; it’s more about understanding what the limitation is on the V-Ray side and whether upgrading the graphics card will let me use a smoother setting. Because, again, I can take the same model, throw it into V-Ray for 3ds Max or KeyShot, and have no problem.


#4

Well, it’s hard to say if you’re comparing apples to apples at all; the mesh you’re going to get will be totally different. Is it running out of memory as in Windows, or is V-Ray actually telling you it’s failing due to not enough video RAM? ’Cause 1 GB certainly isn’t a lot; it’s pretty antiquated, and I don’t imagine any GPU rendering tool can do much with it.


(ryan6565) #5

Hi Jim,

Yeah, that’s precisely what’s happening. I just wanted to make sure it wasn’t an issue with RAM utilization or some bug in V-Ray. The first thing I suspected was the graphics card because, as you stated above, 1 GB isn’t really that much. Sounds like I’m overdue for a new graphics card lol


#6

You can check your RAM usage in the Windows Task Manager. The GPU memory usage can be checked, for example, with the free tool GPU-Z. I’m quite sure 1 GB isn’t enough.
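If you’d rather check from a script than install GPU-Z, a small sketch that calls nvidia-smi (assuming an Nvidia driver with nvidia-smi on the PATH) looks something like this:

```python
# Quick GPU memory check by calling nvidia-smi from Python
# (assumes an Nvidia driver with nvidia-smi available; GPU-Z shows the same numbers).
import subprocess

out = subprocess.check_output([
    "nvidia-smi",
    "--query-gpu=name,memory.total,memory.used,memory.free",
    "--format=csv,noheader",
])
print(out.decode())  # one CSV line per GPU: name, total, used, free memory
```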

How many polygons are used? You can check it with _PolyCount in Rhino.
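And if you want that number per script instead of per _PolyCount, a minimal sketch with rhinoscriptsyntax would be:

```python
# Minimal sketch (Python inside Rhino): total face count of the selected mesh objects,
# roughly what _PolyCount reports for meshes. Note it only counts real mesh objects;
# render meshes of surfaces/polysurfaces are not included here.
import rhinoscriptsyntax as rs

total = 0
for obj_id in rs.SelectedObjects() or []:
    if rs.IsMesh(obj_id):
        total += rs.MeshFaceCount(obj_id)
print("Selected mesh faces: {}".format(total))
```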


(ryan6565) #7

Hi Micha,

About 1.5 million, which isn’t that much for me. I’ve switched back to CPU only, since GPU rendering is also blurring bitmap textures, and it seems fine so far.


#8

I think 1 GB is not enough; we shouldn’t forget that the GPU memory is used by Windows and Rhino too. So, best to forget GPU rendering with 1 GB. :wink:
I’m using a GTX 1080 Ti and I’m very happy. A single GTX 1080 Ti seems to be much faster than my dual-Xeon setup with 32 × 3.3 GHz. With VfR Next we will see a huge step forward for GPU rendering, so if you invest in a good card now, it will be worth it for the future.

Blurry textures in GPU mode: have you seen the advanced GPU options? By default, textures are scaled down for GPU use. This matters most for cards with small GPU memory. If you bought a card like the GTX 1080 Ti, you could use 11 GB and you mostly wouldn’t need the downscaling anymore.


#9

Hi Micha,
have you delved deeper into this? Have you found mip-mapping to cause performance / quality drops in comparison with original size textures?

Cheers
Jonas


#10

I’m not sure about a performance drop, but I expect a quality drop. If you use a finely detailed, high-res texture and scale it down to the default 512x512 pixels, that should be visible. But I don’t have much GPU rendering experience yet, since for my daily work I use VfR 2 and I’ve only used GPU rendering for some tests.

I did a quick test with a finely textured carpet texture: left side at the original size and right side automatically downscaled to 512. A totally different look. Best to try it on one of your scenes.
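If you want to reproduce a comparison like that outside of V-Ray, a rough sketch with Pillow (the file name is just a placeholder, and V-Ray’s own resampling will differ) is:

```python
# Rough way to preview what the default GPU texture resize does: scale a texture
# down to 512x512 and back up for a side-by-side comparison.
# "carpet.jpg" is a placeholder file name.
from PIL import Image

original = Image.open("carpet.jpg")
small = original.resize((512, 512), Image.LANCZOS)
preview = small.resize(original.size, Image.LANCZOS)  # upscale back so the detail loss is obvious
preview.save("carpet_512_preview.jpg")
```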


#11

I think resizing to 512x512 is mostly useless. On-demand mip-mapping could be the “smart” solution.
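For context, mip-mapping keeps a whole pyramid of progressively halved copies and lets the renderer choose the level it needs per sample, instead of forcing everything down to one 512x512 copy. A rough illustration of building such a pyramid (placeholder file name, and not how V-Ray actually implements it internally):

```python
# Rough illustration of a mip pyramid: a chain of progressively halved copies
# instead of one fixed 512x512 resize. "texture.jpg" is a placeholder.
from PIL import Image

img = Image.open("texture.jpg")
levels = [img]
while max(levels[-1].size) > 1:
    w, h = levels[-1].size
    levels.append(levels[-1].resize((max(1, w // 2), max(1, h // 2)), Image.LANCZOS))

for i, level in enumerate(levels):
    print("mip level {}: {}x{}".format(i, level.size[0], level.size[1]))
```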


(ryan6565) #12

Yeah, the GPU is definitely scaling the textures down. I’m new to this GPU/CPU combo for a final production render and was really impressed with the speed vs. quality; the only issues were the artifacts from the low-resolution mesh and the texture-map scaling. I’ve since switched to CPU only and it seems fine. From what I’m hearing, I’ll have to upgrade the card to get the most out of this GPU/CPU option.

I was looking at the 1070: nicer price tag, and it still seems like it would be a huge jump from the GTX 650. I’m also maxed out on RAM when it comes to my motherboard; I have 32 GB here and 64 GB at work, and it’s starting to become a factor, especially when dealing with very large mesh files.


#13

I tested GPU/CPU too, but I wasn’t so impressed by the combo, more by the speed of the GPU alone. The CPU doesn’t help that much, sometimes only a few percent. My dream would be two GTX 2080 Ti cards with NVLink for 22 GB.


(ryan6565) #14

That would be amazing! I can’t even imagine what it would be like to have 22 gigs of graphics memory at my disposal… :pray: lol


(Andrea Montis) #15

I am testing the V-Ray Next for Rhino beta in hybrid mode, GPU+CPU (TR 1950X + GTX 1080 Ti 8 GB).

In hybrid mode my test renders take half the time compared to CPU only.

The only issue is that I am forced to use hybrid only for product or studio scenes, due to the limited memory of current GPUs.
Working mostly on medium-to-big archviz scenes, my RAM consumption is often more than 20 GB…
Anyway, to answer your initial question: you can’t use a 5-year-old GPU and expect to get any advantage.
Windows 10 alone takes 1 GB of your GPU memory just to display the screen.
Buy a card with at least 8 GB of RAM and with the highest number of CUDA cores possible.