How can I change the default Dynamic Memory Limit? I can’t seem to find any option in the V-Ray settings to change the default… Is there any way? I have 32 GB of RAM on my PC and I can’t seem to render scenes with lots of V-Ray proxies, V-Ray Fur, and displacement maps without my memory usage hitting 100%. What should I do in this regard?
V-Ray Fur and displacement are the biggest memory killers.
The “Dynamic Memory Limit” is apparently a V-Ray feature that limits the percentage of available memory used, and it doesn’t even seem to apply in Rhino…
If it’s running out of memory, you need more Windows virtual memory. It’s also possible there is something ‘wrong’ with your scene sending its demands to the moon, or you’re simply making absurd demands on it.
Is there any way to get past that? Can I cover large areas with V-Ray Fur without killing the memory?
I don’t think you’re supposed to cover square kilometers in “fur”. What are you even trying to do? Fur is obviously far more elaborate than simple scattering; it’s meant to cover a character.
Start with seeing how big an area you can cover with the RAM you’ve got.
@Mainul_Hasan_Seam
If you want to cover large areas with grass and so on, Scatter is the better option.
I like to use Grasshopper to divide up the areas visible to the camera, creating multiple Scatter areas depending on the camera setting.
scatter camera clipping is coming soon!
@Nikolay
Yes, unfortunately there is currently no direct way to do this (camera clipping), but I’m looking forward to it when it arrives. At the moment I’m using Grasshopper to create individual areas for each camera and then optimizing them for rendering, since you can only define multiple areas for one object using Grasshopper.
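The per-camera area idea above can be sketched in plain Python. This is only an illustration of the geometry test (which ground cells fall inside a camera’s 2D view wedge), not the actual Grasshopper definition; the function and its parameters are hypothetical:

```python
import math

def visible_cells(cam_pos, cam_dir_deg, fov_deg, max_dist, cells):
    """Return the cells whose center falls inside a simple 2D view wedge.

    cam_pos: (x, y) camera position on the ground plane
    cam_dir_deg: viewing direction in degrees
    fov_deg: horizontal field of view in degrees
    max_dist: far distance beyond which nothing needs to be scattered
    cells: list of (x, y) candidate Scatter-area centers
    """
    visible = []
    for cx, cy in cells:
        dx, dy = cx - cam_pos[0], cy - cam_pos[1]
        if math.hypot(dx, dy) > max_dist:
            continue  # too far away: no need to scatter here
        angle = math.degrees(math.atan2(dy, dx))
        # smallest signed angle between the cell direction and the view direction
        diff = (angle - cam_dir_deg + 180) % 360 - 180
        if abs(diff) <= fov_deg / 2:
            visible.append((cx, cy))
    return visible

# Camera at the origin looking along +X, 60-degree FOV, 100 m cutoff
cells = [(10, 0), (10, 20), (-10, 0), (50, 10), (200, 0)]
print(visible_cells((0, 0), 0, 60, 100, cells))  # → [(10, 0), (50, 10)]
```

In a real setup you would feed the surviving cells to individual Scatter areas per camera; once native camera clipping lands, this whole step disappears.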
It would be good if you could add multiple scatters, displacements, etc. to an object in the V-Ray browser without having to go through Grasshopper.
This is off-topic, but nevertheless:
Multiple modifiers on a single geometry are currently being worked on. I hope it will be done for V-Ray 7.
Multiple scatterers on the same target don’t make sense. You will have no control over collisions or over instances piercing through each other. If you want this anyway, just duplicate the target object and hide it, so it still serves as a host but doesn’t render.
Multiple displacements… probably make sense, but would explode memory requirements and run into floating-point precision issues. Multiple furs, and combinations of displacement + scatter or + fur, are what we’re generally after.
I was primarily referring to multiple modifiers for one object. Without camera clipping, multiple scatters via Grasshopper are useful for defining individual areas; but with camera clipping you obviously won’t need them anymore, and one scatter plus fur etc. per object is enough. For now, Grasshopper is perfect for defining the individual areas.
then, you just need to wait for it
Yes, I use Scatter for the most part, and it actually works fine. The memory consumption is relatively low.
But when I use V-Ray proxies in a scene, they really consume a lot of memory. I had to use virtual memory to work around that, but it’s painstakingly slow.
I wonder if I can really render a moderately complex scene (it’s nowhere near very complex, though) in V-Ray. It just takes so much RAM. What’s the solution? Is V-Ray not built for large projects???
By default there is no limit: V-Ray will eat every available bit of RAM. There is no option exposed in Rhino. You can change the value via script or by modifying an exported scene file. Mind that reducing the memory limit will not make things render faster.
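Since an exported .vrscene is plain text, the “modify the exported scene file” route can be as simple as a search-and-replace script. A minimal sketch, assuming the parameter is named `dynMemLimit` inside the `SettingsRaycaster` plugin block (verify against your own export, as plugin and parameter names can differ between V-Ray versions):

```python
import re

def set_dyn_mem_limit(vrscene_text, limit_mb):
    """Patch the dynamic memory limit in exported .vrscene text.

    Assumes the parameter is called 'dynMemLimit' (value in MB,
    with 0 conventionally meaning 'no limit') -- an assumption to
    check against your actual V-Ray export.
    """
    pattern = re.compile(r"(dynMemLimit\s*=\s*)\d+")
    new_text, count = pattern.subn(lambda m: m.group(1) + str(limit_mb), vrscene_text)
    if count == 0:
        raise ValueError("dynMemLimit not found; check the plugin/parameter name")
    return new_text

# Hypothetical fragment of an exported scene file
scene = """SettingsRaycaster raycaster {
  maxLevels=60;
  dynMemLimit=0;
}"""
print(set_dyn_mem_limit(scene, 16000))
```

As noted above, capping the limit only prevents V-Ray from claiming all RAM; it will not speed up rendering.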
“Large projects”, “huge scenes”, etc. are quite subjective. V-Ray is built for any project, provided you have enough CPU/GPU power and available memory. I’ve seen scenes tens of GBs in size, and there are certainly larger ones. You can check the scale of your project by looking at the frame buffer log pane.
It lists the number of triangles, memory used by light cache (if you use it), the memory used for the embree tree, among other things.
Typically, large memory consumption and low rendering speed are attributed to either complexity (geometry, materials) or render settings, or both. There are a number of techniques to optimize a scene for faster rendering, but no universal solution.
A couple of easy things to do: use Swarm to distribute the rendering across multiple machines, use multiple GPUs, or use Chaos Cloud.
I can’t do much without some figures. If possible, export a .vrscene and submit it to Chaos Support. The guys there will advise further on what the problem could be and how it could be overcome.