RealtimeDisplayModeClassInfo breaking change

Hi,

Those who are implementing a realtime viewport mode using RhinoCommon (RealtimeDisplayMode) will notice that their RealtimeDisplayModeClassInfo implementation no longer compiles against the next Rhino WIP release.

These changes were merged today (an hour ago as of this writing), so any Rhino WIP build after this moment will include the change. RhinoCommon realtime viewport plug-ins that don't override the new property will fail to load.

I have added a new abstract bool DrawOpenGl property that each RealtimeDisplayModeClassInfo implementation must override. To keep your integration working as it did before, simply return false from the property; your viewport mode should then continue to work unchanged.
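A minimal sketch of what that could look like (the class name and GUID below are made up for illustration; only the DrawOpenGl override is the newly required part):

```csharp
using System;
using Rhino.Render;

public class MyModeClassInfo : RealtimeDisplayModeClassInfo
{
    public override string Name => "My Realtime Mode";  // hypothetical mode name
    public override Guid GUID =>
        new Guid("00000000-0000-0000-0000-000000000001"); // placeholder GUID
    public override Type RealtimeDisplayModeType => typeof(MyMode); // your RealtimeDisplayMode subclass

    // The new abstract property. Returning false keeps the old
    // RenderWindow-based drawing path, so existing integrations
    // keep working as before.
    public override bool DrawOpenGl => false;
}
```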

Why does this new property exist, and what good is it for? Good question, I’ll say.

The realtime display mode pipeline has been changed to allow integrations to handle the drawing of render results into the viewport themselves. Once you return true from the DrawOpenGl property, you can override the bool DrawOpenGl() function in RealtimeDisplayMode, as it will now be called. You no longer get a valid RenderWindow, so don't use one when opting to draw render results manually.

For the Raytraced mode see

and

Digging deeper in the code you’ll find where I set up OpenGL drawing before I let Cycles draw its result

and actual drawing

https://github.com/mcneel/cycles/blob/master_rhino_patches/src/device/device.cpp#L111

For the Raytraced mode the new OpenGL-driven drawing greatly improves responsiveness, even with CPU-bound rendering. Here is a sample with Intel OpenCL on the CPU:

Model from HoloMark 2 by @Holo


WIP not released this week ( :sob: ), but whenever the next gets released this change is in.

Sounds exciting!

Where can I read more about this? I see the new DrawOpenGl call in RealtimeDisplayModeClassInfo, and presume that returning true there is the first step, but then what?

Once you return true for the DrawOpenGl property in your RealtimeDisplayModeClassInfo implementation, the underlying display pipeline behaves a bit differently. First, you won't be getting a RenderWindow instance anymore. Instead, you are expected to override the bool DrawOpenGl() function in your RealtimeDisplayMode implementation (like here).
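A sketch of the shape this takes in a RealtimeDisplayMode subclass (the drawing helper is hypothetical; other required overrides are omitted for brevity):

```csharp
using Rhino.Render;

public class MyMode : RealtimeDisplayMode
{
    // Called on the main thread when your ClassInfo's DrawOpenGl
    // property returns true. Do your own GL drawing here instead of
    // relying on a RenderWindow (which you no longer get).
    public override bool DrawOpenGl()
    {
        // BlitLatestResult() is a hypothetical helper that uploads the
        // most recent render result and draws it as a textured quad.
        return BlitLatestResult();
    }

    private bool BlitLatestResult()
    {
        // Native/OpenGL interop code goes here.
        return true;
    }

    // ... other RealtimeDisplayMode overrides omitted for brevity.
}
```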

In this new function you need to add OpenGL draw code that draws whatever render result you have. For Raytraced this code lives in the native core of Raytraced. The drawing code looks like this: https://github.com/mcneel/cycles/commit/0b168114b737b870bf5e50ce785cfc2c660c773e with some OpenGL shaders to assist with the drawing here: https://github.com/mcneel/CCSycles/commit/63de4598bc488a509eccdf49e4e4ba7599c09dd2

Hello Nathan,

I think I need a bit more help here. I am doing the GL drawing, but it doesn’t seem to connect to your code yet. I tried clearing the buffer to green, but I still see white.

Where in your code can I find the initialisation of vertex and fragment programs? Which GL context is set? Do you need me to swap the buffer?

here is the wrapping OpenGL code, and

for GLEW and shader program initialisation.

One more question (sorry if any of these questions are noob-ish, I am not a GL expert): does all your GL code run on the main thread? I think that might be what is going wrong for us, since our code currently renders in another thread and copies the image there too.

Yes, DrawOpenGl() is called from the main thread, so all of the OpenGL drawing has to run on the main thread. In the Raytraced core, access to the display buffers is protected by a mutex: while a draw is in progress the sample() function blocks, and conversely, while sample() holds the mutex the draw function blocks until the mutex is free. You may find different ways to solve this.
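One way to mirror that mutex arrangement in your own integration, sketched in plain C# (all names are made up; the point is only that sample production and drawing exclude each other):

```csharp
using System.Threading;

class DisplayBufferGuard
{
    private readonly object _displayLock = new object();
    private byte[] _pixels = new byte[0];

    // Render thread: called after each sample pass.
    public void PublishSample(byte[] freshPixels)
    {
        lock (_displayLock)   // blocks while a draw is in progress
        {
            _pixels = freshPixels;
        }
    }

    // Main thread: called from DrawOpenGl().
    public void Draw()
    {
        lock (_displayLock)   // blocks while the render thread publishes
        {
            // Upload _pixels to a GL texture and draw it here.
        }
    }
}
```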

Blocking seems fine if the OpenGL is drawing in the main thread, and the renderer must wait, but at least for us, with our long rendering times, the opposite would be awful, and would just kill the UI until the next frame is ready. I will try it out, but I suspect we must come up with a different strategy. Unfortunately, this might well involve copying the buffer.

You are of course not obliged to use OpenGL to draw your render result. If using the RenderWindow works well for you, it may be easier to stick with it rather than overly complicate your own code.

We were hoping to simplify our code a bit, and possibly gain some speed here. We also have some open issues in our descaling code that using OpenGL would fix automatically, so there is that too.

Doing a straight copy will be a lot more efficient than the descaling and alpha twiddling we are currently doing, and it is of course happening in another thread. Btw, is it still necessary for us to set the alpha to 1?

What drives the request to draw on Rhino’s side?

You call SignalRedraw() in your RealtimeDisplayMode implementation. This sets a flag in the underlying display pipeline, which gets checked by a timer event. If the flag is set, a redraw of the viewport is issued, which will eventually resolve into either putting the RenderWindow contents into the Rhino OpenGL framebuffer, or letting you do that yourself through DrawOpenGl().

With Raytraced I currently call SignalRedraw() after every sample pass over the entire scene. This can run in a separate thread, so it shouldn’t block the UI in any way. For huge scenes one sample pass in Raytraced can take quite some time as well, but with the SignalRedraw() possibility from the separate rendering thread it is not a problem.
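The render-thread side of that could look roughly like this (RenderOneSamplePass is a hypothetical stand-in for your engine's work; SignalRedraw is the actual RhinoCommon call):

```csharp
using System.Threading;

// Inside your RealtimeDisplayMode subclass, running on a render thread:
void RenderLoop(CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        RenderOneSamplePass(); // hypothetical: trace one pass over the scene
        SignalRedraw();        // sets the flag a Rhino timer polls ~10x/second
    }
}
```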

Right now the timer checks 10 times per second whether the redraw flag has been set. You may have read elsewhere on the forum that there are some responsiveness problems while Raytraced is running; you may see similar problems if you move to drawing with OpenGL (assuming a huge speed gain compared to RenderWindow usage, which is quite possible, since using the RenderWindow means copying the very same data around several extra times on the CPU). We're working on identifying and fixing the problem. As far as we know it isn't the fault of the realtime integration, but the realtime integration does expose it.

/Nathan

(Do we need to set alpha to 1 for VP still?)

It sounds like you would potentially have the same issue we do, i.e. what if the renderer is busy when the redraw call comes in. Do you have more than one buffer?

Not sure about that - probably (</guess>)

Raytraced indeed has several buffers: the RenderBuffer in which the actual samples are collected, and a DisplayBuffer into which the contents of the RenderBuffer get tonemapped. The drawing tries to acquire a mutex called display_mutex; tonemapping also acquires that mutex when we get to that stage. Slow passes shouldn't matter as such, as tonemapping and OpenGL drawing are both generally fast operations compared to the path tracing.
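A sketch of that two-buffer setup (all names here are illustrative, not the Raytraced code itself): the expensive path tracing runs outside the lock, and only the fast tonemap copy and the draw contend for display_mutex.

```csharp
using System.Threading;

class TwoBufferDisplay
{
    private readonly object _displayMutex = new object();
    private float[] _renderBuffer = new float[0]; // raw accumulated samples
    private byte[] _displayBuffer = new byte[0];  // tonemapped, ready to draw

    // Render thread: accumulate samples without holding the lock,
    // then take it only for the (fast) tonemap step.
    public void AfterSamplePass()
    {
        // ... path tracing has written into _renderBuffer ...
        lock (_displayMutex)
        {
            _displayBuffer = Tonemap(_renderBuffer);
        }
    }

    // Main thread: draw under the same lock.
    public void Draw()
    {
        lock (_displayMutex)
        {
            // Upload _displayBuffer to the GPU and draw.
        }
    }

    // Hypothetical placeholder for the real tonemapping operation.
    private static byte[] Tonemap(float[] src) => new byte[src.Length];
}
```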

/Nathan

edit: P.S. thanks to your question I realized I could move the display_mutex acquisition much closer to the tonemapping scope, making Raytraced a bit more responsive :slight_smile: Thanks!

Haha, you are welcome :slight_smile:

I went back and re-read your initial post and the supplied code, and I am pretty sure how I will do it now. I should have done this earlier, sorry about all the banter.

By the way, 10 redraws a second is really very slow. Any chance of bumping this to 30, or maybe 60?

No problem. This is how the mind works. Discussion helps, in more ways than one can anticipate :smiley:

That depends on whether we can find the problem we think we have with the message pumps getting flooded, resulting in a freezing or weirdly behaving UI (dialogs popping up behind the Rhino main window). Note that e.g. rotating the view will redraw as fast as it can; the timer is really for when there are no user-interaction generated redraws.

Your render engine can still process many samples during the intervals without anything blocking. It just means that the sample counter in the HUD may jump by larger amounts (tens or more samples if it is a light scene). And after a while higher redraw frequencies probably won't contribute to the visual quality of the viewport anyway, as you will likely already have acceptably converged results.

But the 10 times a second is a temporary stop-gap measure.

So, I have implemented this, as best I could, and it works, and it is fast… although there are some issues.

One is that our HDR environment is no longer visible. I think this might be because of the alpha issue. In our own copy routine we would set the alpha to 1.0f, but here we are not doing that.

The second issue is that we get a lot of flicker. Raytraced also gets a lot of flicker, but it looks like this might be happening because of jumps in and out of descaling (we don’t stop descaling until the user lets go of the mouse button). For us, it looks like there is an extra white frame between every frame we render, while navigating. We do not clear any buffers anywhere. Do you have an idea what might cause it?