Plugin Development - OpenGL / Shader use

Hi there,

My company is considering developing a plugin for Rhino that would place new geometry in the Rhino viewport and merge it with existing models. It would do this using OpenGL shaders, and before we start on this project I wanted to ask the community a couple of quick questions to make sure that what we want to achieve is viable. So here goes…

  1. Are there any issues with adding an OpenGL shader to the already existing OpenGL viewport instance?
  2. Can the DepthBuffer and ColourBuffer information be readily accessed using the SDK, in order that any rendering via the new shader can take this into account?
  3. Are there likely to be any issues with a high rate of change in the new shader, given that it will likely be operating at 60 fps most of the time?

I am not a programmer myself, so please forgive me if my terminology is a bit off, but I am trying to scope this on behalf of my company.



@jeff, can you chime in here?

Hi Dave,

I’ll try to address each of your questions in order… but knowing more about exactly what it is you plan to do would most likely allow us to provide more detailed answers.

  1. You actually don’t just “add” a shader to a viewport. Shaders are enabled and disabled accordingly at a specified time for a specified object. If there are no specific objects associated with the shader (i.e. it produces its own geometry), then placement of your code within Rhino’s pipeline will be important, but should be doable using Rhino’s display pipeline conduit mechanisms.

  2. Access to these buffers depends on what you mean by “readily accessed”. Both the depth and color buffers can be retrieved at any point within the pipeline; however, their contents will depend on “when” you obtain them. There is no initial pass through the database that creates a depth buffer which can then be accessed during the rendering process. The depth buffer can either be created as a separate object (by rendering the entire scene to an internal buffer), or it can be retrieved at any time within the rendering process. The same is true for the color buffer.

  3. The frame rate (of any system) is not determined by a shader, nor is it determined by your requirements. You can require 60 fps all you want, but that doesn’t mean you’re going to get it. The overall frame rate is ONLY based on how fast the system can process and render ALL entities within the frame… and adding more complexity to that system will almost certainly lower performance. For example: say a standard Rendered mode without any bells and whistles produces 100 fps for a given model containing 1000 objects. If I duplicate all objects so that there are now 2000, I would love to be able to say that 100 fps will still be maintained, but I can’t. Why? Because twice as many objects are now being processed and rendered as before, so frame rates will drop. How much they drop depends on what threshold the pipeline is currently operating at, but they will drop. Now throw in some complexity like real-time shadows: I guarantee frame rates will drop at that point. So depending on what your shader does and how complex it is, it too will lower the overall frame rate of Rhino’s rendering pipeline. If Rhino is currently only operating at 10 fps for the given scene, your shader will only decrease that performance, not increase it.
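The arithmetic behind point 3 can be sketched with a back-of-the-envelope model (nothing Rhino-specific here, just illustrative numbers): per-frame costs add up, so the effective frame rate is the reciprocal of the total frame time, not the frame rate of any one stage.

```python
# Hypothetical frame-budget arithmetic: per-frame costs add, so
# effective fps is 1000 / total_frame_ms, regardless of any one stage.

def fps(frame_ms: float) -> float:
    """Frames per second for a given total frame time in milliseconds."""
    return 1000.0 / frame_ms

# Suppose the base scene takes 10 ms/frame (100 fps) ...
base_ms = 10.0
# ... doubling the object count roughly doubles that cost ...
doubled_ms = base_ms * 2
# ... and a custom shader pass adds its own fixed cost on top.
shader_ms = 5.0

print(fps(base_ms))                 # 100.0 fps
print(fps(doubled_ms))              # 50.0 fps
print(fps(doubled_ms + shader_ms))  # 40.0 fps
```

Because costs add in the time domain, a shader that costs "only" 5 ms still knocks a 50 fps scene down to 40 fps, and the drop gets proportionally worse the slower the scene already is.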

Now, if you’re only talking about certain types of shaders that can run independently of the rendering pipeline, like post-processing shaders that add special effects to the end frame, then yes, those can run as fast as you’re able to make them…but they’ll still be dependent on how long it takes Rhino to produce that end frame…hope that makes sense.
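That end-frame dependency can be illustrated with a toy post-processing pass (hypothetical code, not the Rhino API): the effect can only run once the pipeline has produced a finished color buffer, so its throughput is capped by how fast those frames arrive.

```python
# Toy post-processing pass (hypothetical, not Rhino's API): it operates
# only on the finished color buffer handed to it by the pipeline.

def render_frame() -> list[list[float]]:
    """Stand-in for the pipeline producing a final 2x2 grayscale frame."""
    return [[0.2, 0.4], [0.6, 0.8]]

def post_process(frame: list[list[float]], gain: float = 1.25) -> list[list[float]]:
    """Simple end-frame effect: scale brightness, clamped to [0, 1]."""
    return [[min(1.0, px * gain) for px in row] for px_row in [frame] for row in px_row]

final = post_process(render_frame())
print(final)  # [[0.25, 0.5], [0.75, 1.0]]
```

However fast `post_process` itself runs, a new output frame can only appear when `render_frame` delivers one, which is the dependency described above.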

Again, if we knew more details about what it is you’re planning to do, we could probably provide answers closer to your target.

That being said, you guys should probably read up on Rhino’s SDK and the display pipeline in general to get a good idea of how the pipeline is broken down into its different channels, and how you can override or augment those channels. You can even override the entire rendering pipeline with your own (or just parts of it)… it just depends on what you need to do and whether Rhino provides a way to do it. If Rhino can’t do it, then you should be able to do it yourself… it’ll be more work on your part, but you should still be able to do it.
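The channel idea described above can be sketched schematically (all names here are hypothetical stand-ins, not Rhino SDK identifiers): the frame is built in a fixed sequence of channels, and a conduit can attach extra code to a channel or replace a channel’s stock behavior outright.

```python
# Schematic model of a channel-based display pipeline (hypothetical names,
# not Rhino's SDK): each frame runs through fixed channels, and a "conduit"
# can augment a channel or override it entirely.

CHANNELS = ["calc_bounding_box", "pre_draw_objects",
            "draw_objects", "post_draw_objects"]

class Pipeline:
    def __init__(self):
        self.hooks = {name: [] for name in CHANNELS}
        self.overrides = {}

    def attach(self, channel: str, fn):
        """Augment a channel: run fn after the stock behavior."""
        self.hooks[channel].append(fn)

    def override(self, channel: str, fn):
        """Replace a channel's stock behavior entirely."""
        self.overrides[channel] = fn

    def draw_frame(self) -> list[str]:
        log = []
        for channel in CHANNELS:
            if channel in self.overrides:
                log.append(self.overrides[channel]())
            else:
                log.append(f"stock:{channel}")
                log.extend(fn() for fn in self.hooks[channel])
        return log

# A plugin that draws its own geometry late in the frame would attach
# to a post-draw channel rather than replace the whole pipeline.
pipe = Pipeline()
pipe.attach("post_draw_objects", lambda: "plugin:draw_custom_geometry")
print(pipe.draw_frame())
```

This is only a conceptual model, but it mirrors the choice described above: augment individual channels when Rhino’s stock behavior is close to what you need, or override a channel (or the whole sequence) when it isn’t.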



Thank you for your detailed response. I can see that we need to investigate the rendering pipeline further before we embark on this project. I am encouraged by your comments; it seems Rhino is flexible enough to accommodate a good degree of access, and even if it cannot produce exactly what we require, we can probably shoe-horn our own solution in there. I guess that is what I was hoping to hear.
Unfortunately, at this time I cannot divulge too much more about our intended project, but hopefully this will change soon and I can post some more info about it.

Many thanks,