Does it still make sense to spend effort on developing rendering engines (like the Rhino renderer)? UE5 seems unbeatable for the future. Who would now spend time creating still images when you can create reality in real time in UE5?
I just can't think of a reason why I would spend hours rendering one picture instead of setting up a dynamic scene.
I just feel a huge asymmetry between what's achievable with offline rendering and this next-gen game engine.
We design a lot of lamps and interiors. From our point of view, there's still some work to be done on the precision of lights/emitters and on reflection/refraction/caustics in real-time render engines before they can replace the offline renderings we produce. But there's no doubt that it's getting there, and I'm sure the end result will be ray-tracing speeds close to real time, with the ability to turn off things like caustics if you need higher speeds. That's already being done in KeyShot, where GPU mode disables some of the more advanced materials, and the results produced by the GPU aren't quite as precise as the CPU's; if you use the VR part of KeyShot, even more features are gone. But give it another 10-15 years, and I'm sure it will be up to par.
Just my 2 nickels
Notice most demos are diffuse surfaces or opaque objects. UE5 is fine as long as there is little or no glass; the moment refractions and indirect caustics are involved, it's game over. (They have special hacks to simulate water only.)
OK, so it seems I should have said "give it another 10-15 hours" instead of years. It's not real-time just yet, but the just-released KeyShot 10.2 update actually runs caustics faster on the GPU than on the CPU (depending on your hardware, of course). I'm obviously getting old!
Hey, actually the tech's been available for a while, it just seems no one knows about it…
I recommend you guys look into Nvidia Omniverse for a ready, off-the-shelf solution… it's still early days, but it uses the same backend as my Unreal setup. The reason I'm still on Unreal is that it gives far more flexibility in shader creation.
My Unreal is a custom build of Nvidia's GitHub caustics branch. It's technically a "dated" tool (I will probably switch to Omniverse eventually), but it's very fast, stable, and IMO production ready.
Excuse the terrible video, but I hope it serves as a proof of concept.
It does have some small, manageable bugs, but overall it does everything in real time (all 8K textures, displacement, GI, caustics, etc.). I now have a studio scene double this size running at 80 fps in the editor, and a 16x AA 4K square frame renders at 2 seconds per frame.
It's running on an RTX 3090 with an i9 @ 5 GHz and 128 GB of DRAM. For anyone who wants to try this: while the only hard requirement is an RTX card, I highly recommend as much DRAM as possible to keep the editor running efficiently.
Here are some more recent renders; these are all originally 4K.
What doesn't have a connection with Rhino3D? Rhino is everywhere. I remember a few years back someone spotted Rhino on the Autodesk office tour, and another person spotted Rhino in a Pixar office ad.
Sometimes people wonder how McNeel can stay in business with software so strong and yet relatively cheap. I can only wonder how big the market share is for a tool that doesn't compete with others, but can do basically anything and complements every tool on the market.
I myself was active in the 2D documentation wishes for Rhino 8; meanwhile, I found two articles about actual bridges being built with no paper whatsoever, only with AR and tablets.
Anyway, about the topic: real-time rendering takes a lot of computing power. It may be a reality for companies in the US and Europe; however, at least in my experience, it's something companies are not willing to invest in here in South America.
I think this is for the full Omniverse package, including the Nucleus server etc. I believe that some of the basic functionality will remain free for individuals, and hopefully they’ll provide some sort of stand-alone license for the RTX renderer and physics engine… but I haven’t been able to find anything specific on this.
I was aware of Unreal Engine 5 before I stumbled upon this thread. Nanite is a new feature of Unreal Engine 5. It is an algorithm that divides a complex mesh into a hierarchical pyramid of meshes: one big, low-resolution mesh sits at the top of the pyramid, and small, high-resolution meshes sit at the bottom. For obvious reasons this algorithm does not work well for transparent objects. It is not clear why it does not work well for foliage - incompetence seems to be the most plausible reason. Almost all the work is done by the GPU. @nathanletwory These hierarchical meshes would improve the performance of Rhino's Render command.
Nanite can handle orders of magnitude more triangles and instances than is possible for traditionally rendered geometry…

How does Nanite work? During import: meshes are analyzed and broken down into hierarchical clusters of triangle groups. During rendering: clusters are swapped on the fly at varying levels of detail based on the camera view, and connect perfectly without cracks to neighboring clusters within the same object. Data is streamed in on demand so that only visible detail needs to reside in memory. Nanite runs in its own rendering pass that completely bypasses traditional draw calls. Visualization modes can be used to inspect the Nanite pipeline…

A Nanite mesh is inherently a hierarchical level-of-detail structure that relies on being able to simplify small triangles into larger triangles, choosing the coarser one when it determines the difference is smaller than can be perceived…

Forests filled with tree canopies made up of individually modeled leaves almost certainly won't run well, but using Nanite for tree trunks and branches might.

source: Nanite Virtualized Geometry | Unreal Engine Documentation
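To make the "hierarchical level of detail" idea concrete, here is a minimal toy sketch of cluster selection by projected error. This is NOT Epic's actual code; the class names, the error metric, and the `fov_scale` constant are all illustrative assumptions. It only shows the general pattern the docs describe: take the coarse cluster when its simplification error would be imperceptible on screen, otherwise recurse into the finer children.

```python
# Toy sketch of Nanite-style hierarchical LOD selection.
# All names and the screen-space error metric are illustrative, not Epic's.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cluster:
    triangles: int            # triangle count at this node's resolution
    geometric_error: float    # simplification error in world units
    children: List["Cluster"] = field(default_factory=list)

def screen_space_error(cluster: Cluster, distance: float,
                       fov_scale: float = 1000.0) -> float:
    """Project world-space simplification error to approximate pixels."""
    return cluster.geometric_error * fov_scale / max(distance, 1e-6)

def select_clusters(root: Cluster, distance: float,
                    max_error_px: float = 1.0) -> List[Cluster]:
    """Return the coarsest clusters whose projected error is below the
    threshold; recurse into finer children where the coarse one isn't."""
    if screen_space_error(root, distance) <= max_error_px or not root.children:
        return [root]
    selected: List[Cluster] = []
    for child in root.children:
        selected.extend(select_clusters(child, distance, max_error_px))
    return selected

# A tiny two-level pyramid: one coarse proxy over two detailed halves.
leaves = [Cluster(triangles=128, geometric_error=0.001),
          Cluster(triangles=128, geometric_error=0.001)]
root = Cluster(triangles=32, geometric_error=0.05, children=leaves)

far = select_clusters(root, distance=500.0)   # coarse proxy suffices
near = select_clusters(root, distance=5.0)    # detailed children needed
print(len(far), sum(c.triangles for c in far))     # 1 32
print(len(near), sum(c.triangles for c in near))   # 2 256
```

The payoff is in the last two lines: the same object costs 32 triangles when far away and 256 up close, and a deeper pyramid scales this to millions of triangles, which is the effect the docs describe.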
It really is!
Been using Unreal as my main engine for a year now… I have to admit it took a very long time to iron out all the bugs, but it's quite amazing once it's all set up. There are still limitations, like incorrect reflected DOF and VRAM size (even with an RTX 3090), etc. But it's worth it for the speed and caustics.