BUG - render batch memory usage

I made a macro that batch renders overnight, about 1400 images. In r7 I already did this a few times without any issues. With r8, coming back in the morning, I found it had crashed without finishing the batch. HWiNFO64 tells me that both the physical and the virtual memory were fully utilized at some point during the night.
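For context, the macro is essentially a loop like the sketch below. This is not my exact script, just the general shape; the view names and output path are placeholders.

```python
import rhinoscriptsyntax as rs

OUTPUT_DIR = r"C:\renders"   # placeholder output folder

# One render per named view (illustrative -- the real macro does more than this).
for view_name in rs.NamedViews():
    rs.RestoreNamedView(view_name)
    rs.Command("_Render", echo=False)       # render with the current render settings
    rs.Command('-_SaveRenderWindowAs "{}\\{}.png"'.format(OUTPUT_DIR, view_name), echo=False)
    rs.Command("_CloseRenderWindow", echo=False)
```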

I saved the file in r7 format, made sure all settings were the same, and retried the next evening. The only thing I changed was lowering the sample count from 1500 to 500, since r7 would take something like two whole days at the full 1500 samples. The next morning r7 had finished the batch fine, and no increase in memory usage was observed.

I started the batch again in r8 after updating to the new r8 beta. After an hour of rendering and watching Task Manager I can tell that the problem is still present: it started at about 7-8 GB of usage, was at 14 GB after one hour, and continues to slowly climb over time.
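For the next run I might log the memory from a small helper script instead of eyeballing Task Manager, so the growth curve is easier to share. A rough sketch of what I have in mind, assuming psutil is installed and the process is called Rhino.exe:

```python
import time
import psutil

# Find the running Rhino process by name (assumption: it is "Rhino.exe").
rhino = next(p for p in psutil.process_iter(["name"]) if p.info["name"] == "Rhino.exe")

with open(r"C:\renders\memory_log.csv", "w") as log:   # placeholder path
    log.write("time,rss_mb\n")
    while rhino.is_running():
        rss_mb = rhino.memory_info().rss / (1024.0 * 1024.0)
        log.write("{},{:.0f}\n".format(time.strftime("%H:%M:%S"), rss_mb))
        log.flush()
        time.sleep(60)   # sample once a minute
```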

In short:
Rendering multiple images in r7 works fine and the memory usage stays constant, whereas the r8 beta's memory usage increases over time and it inevitably crashes once it runs out of memory.

I assume this is a memory leak?

Can you show me the batch macro you are using, so I can see what else is involved besides the renderings?

RH-77045 Rendering through Frame buffer increases memory usage

Reading through the YT it looks like you have a fix, but the core issue isn't resolved yet. I have no idea, but since this is important to me I'll just throw something in here: in the menu bar of the render window there is "File > Recent". Maybe those recent renderings are related to this issue? I'm just poking around in the dark.

I've been following your progress on this in the YT, sad on every day without progress and cheering every time you made some. In the latest beta it is mentioned as fixed, and you say so in the YT as well, so I started another render batch yesterday evening with high hopes.

Coming back to the office today, I sadly have to report that it still isn't fixed. Same batch/macro of roughly 1400 renderings, 1536x1024 at 1500 samples. Before, it only managed to render for about 3-4 hours until memory was full; now it managed to render for about ten hours before Rhino ran out of memory again. That's about two thirds of what I currently need to render in one batch.

It had rendered from roughly 22:00 to 09:00. When I came back at 13:00, Rhino had stopped rendering but hadn't crashed. This is Task Manager:

Hope you can finally get down to the root of this.

Hi, I will run new tests to see if I can repeat this. In the tests I did so far I did not find any significant memory increase. Can you let me know the resolution you are currently rendering at?

@hitenter still doing tests here but I have to confess I closed the YT too quickly. No further info needed at this point.

Btw, as a side note: did you try rendering with a denoiser? I suspect you will be able to achieve good-quality images with far fewer samples.

Currently I don't use the denoiser. I need to render surfaces with a fine, grainy structure. A while back I wasn't happy with the results I got from bump mapping, so instead I used fewer samples to keep just the right amount of noise and somewhat fake the grainy structure.
I now get better results with physically based materials. One rendering (1536x1024 at 1500 samples) takes only about 20-30 seconds and is already free of noise; the new Cycles implementation brings a very nice speed improvement. I'm afraid the denoiser would just wash away the grainy structure of the material.

Though I think I should give it a shot anyway; maybe the results will be different from what I expect.

Keeping my fingers crossed that you'll find the problem sooner rather than later.

I currently use only 1536x1024 at 1500 samples; the target is the web, so high resolution isn't necessary. Now I remember that there is one render setting I changed: the tile size. I think the default is 256. I made a few comparisons and 512 is just a tiny bit faster.

-RhinoCycles_SetRenderOptions t 512 i 512 Enter
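For reference, I did those comparisons by hand; scripted, the check would look roughly like this (just a sketch, and 256 being the default is only my assumption):

```python
import time
import rhinoscriptsyntax as rs

for tile in (256, 512):
    # Same command I use interactively, just with a variable tile size.
    rs.Command("-RhinoCycles_SetRenderOptions t {} i {} Enter".format(tile, tile), echo=False)
    start = time.time()
    rs.Command("_Render", echo=False)
    rs.Command("_CloseRenderWindow", echo=False)
    print("tile {}: {:.0f} s".format(tile, time.time() - start))
```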

Might the tile size maybe have an effect?

Not sure if you have seen it, but I reopened the issue and added new findings.
