1- When clicking on the pause (or start) control in the viewport for Raytraced, it always selects the objects behind it. Also, the controls are still far too small and sit very close to the left-hand sidebar, which means you often get the cursor for resizing the sidebar rather than hitting the Raytraced control. There is no reason I can see why these controls can't move away from the edge a bit and ideally get a bit larger.
2- Is there a way to control the number of samples when outputting via the render panel? Draft seems to be 50 samples. Good (the next step up) is 500. That’s a huge difference. There is a control in the render settings called ‘session’. Is this something to do with it?
3- As mentioned elsewhere in passing, the depth channel output doesn't seem to work reliably. A couple of times I got a really nice depth map ranging from white to black with a smooth gradient between the two. Mostly, though, I just get an image that's 99% either black or white with barely anything in between.
4- I’d like to be able to export a single layered file that has all the channels as separate layers. This would make saving out renders much easier. Or at least the option to export all channels as individual files into a folder. Perhaps provide some kind of interface for selecting which get exported if that’s preferable.
5- Tone mapping is rather tricky to get right. It would be great to have some kind of automation here to get it close to right automatically.
6- Some kind of exposure controls in raytrace (all implementations ideally) would be great, including camera aperture / DOF settings
thanks and sorry if some of these have been addressed elsewhere or they are already implemented and I’ve just not found them.
Thanks, I hadn’t tried it before because it didn’t look right to me. It shows 10 when I think it is actually showing 100; the text field is just cropping the text (see the screenshot above).
Please can you give me a link to share the file?
Thanks, seems I’m repeating myself!
Yeah, fair point. We are in a world of computational photography, so I guess we’ve become accustomed to pressing a button and having a load of hard work done for us in an ISP. Perhaps it’s a question of having a target for the histogram and providing a few presets that generally produce decent results. Users can then tweak from there? Dare I say it, is this something ML could help with?
To be honest, I think exposure controls would go a long way to helping with the tone mapping challenges. If you can get the initial render looking closer to what you want then the tone mapping isn’t having to do so much work.
It would be very helpful to be able to line up multiple renders in a queue. Initially it could just be choosing different views to render in sequence. In due course it could become more advanced than that, with options for selecting different lighting and render settings.
Now that I’m getting pretty decent, fairly high-res renders out in ~10-20 mins with the Intel denoiser, it would be great to be able to line up half a dozen over lunch, or up the settings and leave it going overnight.
A prerequisite for the above would be the ability to set a save location for the renders, so that they are saved to disk / server as they are completed. It would be a real shame to lose 6 renders because the machine crashed during the 7th.
Another one for rendering output. When rendering with transparent background turned on, the image that Rhino outputs, if saved without the alpha channel, has a 100% black background. It would be nice to be able to select what colour it is in the post effects panel.