Some time ago, at least in part due to a question I asked, the DisplayConduit.ObjectCulling event was added to RhinoCommon, for which my thanks!
My display conduits look much better now! A small, mostly cosmetic issue remains, however. When I click in a viewport that has my objects drawn and the selection menu pops up, its preview does not respect ObjectCulling. In the picture below, the Perspective viewport should not show the points and lines that are visible in the Top viewport. When I click on the intersection of a point and the curve, the selection menu appears and its dynamic white/pink preview is shown in the Perspective viewport as well.
I'm not sure if the selection functions pay attention to conduit object culling. @jeff or @mikko, do you know if the selection filters out objects based on display conduit object culling?
No, they do not… I'm not even sure how they could (currently).
This is the problem with display-based (only) object culling or object drawing… it does not actually cull the object from Rhino, it just keeps the object from drawing (which is not the same thing). Rhino, and thus the picker, still thinks the object is there and therefore still allows you to pick it (or multi-pick it) or window-select it.
Except for detail views, I do not believe Rhino has a way to cull objects from specific views/viewports using the SDK if/when an object is physically within the view's frustum.
This is similar to the problem we had with bounding box additions not taking effect on commands that depend on the overall scene bounding box (e.g. Zoom Extents). We fixed that by adding a new channel and then updating the commands to run in that channel and use the resulting bounding box instead of the document's.
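For context, a conduit participates in that scene bounding box through the CalculateBoundingBox channel. A minimal sketch, assuming RhinoCommon's DisplayConduit API (the class name and the mesh field are hypothetical):

```csharp
// Hypothetical conduit that draws its own geometry and makes sure
// commands like Zoom Extents account for it.
class MyConduit : Rhino.Display.DisplayConduit
{
    private readonly Rhino.Geometry.Mesh _mesh; // geometry only this conduit draws

    public MyConduit(Rhino.Geometry.Mesh mesh) { _mesh = mesh; }

    protected override void CalculateBoundingBox(Rhino.Display.CalculateBoundingBoxEventArgs e)
    {
        base.CalculateBoundingBox(e);
        // Merge our geometry's box into the channel's box; without this,
        // Zoom Extents would ignore anything the conduit draws.
        e.IncludeBoundingBox(_mesh.GetBoundingBox(false));
    }

    protected override void PreDrawObjects(Rhino.Display.DrawEventArgs e)
    {
        e.Display.DrawMeshWires(_mesh, System.Drawing.Color.Black);
    }
}
```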
The only thing I can think of as a possible solution (on our end) is that the pipeline keeps track of any ObjectCulling objects, and then the rest of Rhino could query the pipeline for object visibility, which would then include all culled objects as well as standard frustum visibility tests. As it is right now, there is no way for other sections of Rhino (or plugins) to know which objects any given display pipeline has culled from its object lists.
So to summarize… the whole problem here is that the ability to say "Don't draw this object in this view" is not, and does not mean, the same thing as "Hide this object in this view"… I agree that it should, but as of right now, it does not.
Thanks Jeff,
That's what I thought was the case, but I wasn't entirely sure, since we did add the code to allow conduits to participate in "zoom extents".
Thank you all for your detailed answers. Like I said, for us it is a cosmetic issue and the availability of ObjectCulling in RhinoCommon is a significant improvement over the situation we were in up until recently.
I know this is an ancient topic, but I was looking up object culling and there it was… I have a number of extensive display conduits built, but I have never used object culling. Can anyone post an example of how it's implemented? In my code I am constantly updating the collections from which the conduit draws; I guess I am pre-culling the conduit, so to speak.
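For reference, subscribing to object culling might look something like the sketch below. This is an assumption-laden example, not code from this thread: the conduit name and the set of IDs are hypothetical, and the Perspective-only check just mirrors the scenario from the original post; the CullObjectEventArgs members are as I understand the RhinoCommon API.

```csharp
// Sketch: cull specific document objects from one named view only.
class CullingConduit : Rhino.Display.DisplayConduit
{
    // IDs of document objects this conduit should keep from drawing (hypothetical field).
    public System.Collections.Generic.HashSet<System.Guid> HiddenIds =
        new System.Collections.Generic.HashSet<System.Guid>();

    protected override void ObjectCulling(Rhino.Display.CullObjectEventArgs e)
    {
        // Only cull in the Perspective viewport, as an example.
        if (e.Viewport.Name == "Perspective" && HiddenIds.Contains(e.RhinoObject.Id))
            e.CullObject = true; // object is skipped when this pipeline draws its object list
    }
}
```

Note that, as discussed above, this only stops the object from being drawn; picking and window selection still see it.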
Alternatively, I am looking for a way to speed up my conduits. They are fast, but in some cases I am asking a lot of them. For example, I have an object which has been populated with voxel-like breps… lots of them. I am assuming that depth testing attempts to cull objects that are behind each other when I draw during the PreDrawObjects handler. But my guess is that it would be even better if I didn't even try to draw things that are behind each other?
My draw method (called by PreDrawObjects) is:
public void DrawPipesCurrent()
{
    Display.PushModelTransform(StockXform);
    bool test = uiData.SolidPipes;
    Rhino.Display.DisplayMaterial tempColor;
    if (!PoseBrep)
    {
        foreach (KeyValuePair<Mesh, sd.Color> pipe in CurrentPipeMeshes)
        {
            if (test)
            {
                double trans = 1 - (double)pipe.Value.A / 255;
                tempColor = new Rhino.Display.DisplayMaterial(pipe.Value, trans);
            }
            else
            {
                tempColor = new Rhino.Display.DisplayMaterial(pipe.Value);
            }
            if (Mode.ShadedPipelineRequired)
            {
                Display.DrawMeshShaded(pipe.Key, tempColor);
            }
            if (Mode.WireframePipelineRequired)
            {
                Display.DrawMeshWires(pipe.Key, sd.Color.LightGray);
            }
        }
    }
    else
    {
        foreach (KeyValuePair<Brep, sd.Color> pipe in CurrentPipeBreps)
        {
            if (test)
            {
                double trans = 1 - (double)pipe.Value.A / 255;
                tempColor = new Rhino.Display.DisplayMaterial(pipe.Value, trans);
            }
            else
            {
                tempColor = new Rhino.Display.DisplayMaterial(pipe.Value);
            }
            if (Mode.ShadedPipelineRequired)
            {
                Display.DrawBrepShaded(pipe.Key, tempColor);
            }
            else
            {
                Display.DrawBrepWires(pipe.Key, sd.Color.LightGray);
            }
        }
    }
    Display.PopModelTransform();
}
CurrentPipeBreps contains 24k breps. It takes about 400 ms to draw, which slows down the UI. Alternatively, I could try using low-poly geometry while dynamic drawing is on (i.e. while the user is moving the camera). I just wondered whether it is worth trying to do my own depth testing, or is that going to be far too slow?
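The low-poly idea could be sketched roughly like this. This is a hedged example, assuming the DisplayPipeline.IsDynamicDisplay property (true while the view is being manipulated) behaves as I understand it; `_proxyMeshes`, `_fullMeshes`, and `_material` are hypothetical pre-built fields:

```csharp
// Sketch: draw cheap proxy geometry while the user is orbiting/panning,
// and the full-resolution geometry once the view settles.
protected override void PreDrawObjects(Rhino.Display.DrawEventArgs e)
{
    var meshes = e.Display.IsDynamicDisplay ? _proxyMeshes : _fullMeshes;
    foreach (Rhino.Geometry.Mesh mesh in meshes)
        e.Display.DrawMeshShaded(mesh, _material);
}
```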
I can't really say whether writing your own occlusion-culling algorithm is going to improve speed. It can be pretty tough to write.
I do notice one thing you can do with the above code: minimize the creation of DisplayMaterial instances. For wireframe drawing you don't need to create any materials at all. You could also sort your CurrentPipeMeshes by color and only create a new material when the color changes.
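That suggestion might look like the sketch below: instead of sorting, a small cache creates one DisplayMaterial per distinct color and reuses it for every mesh of that color. This is a sketch against the poster's code, not tested; it ignores the transparency branch for brevity, and the `byColor` cache could also live in a field so materials survive across frames.

```csharp
// Sketch: one DisplayMaterial per distinct color instead of one per mesh.
var byColor = new System.Collections.Generic.Dictionary<System.Drawing.Color,
    Rhino.Display.DisplayMaterial>();
foreach (KeyValuePair<Mesh, sd.Color> pipe in CurrentPipeMeshes)
{
    Rhino.Display.DisplayMaterial mat;
    if (!byColor.TryGetValue(pipe.Value, out mat))
    {
        // First time we see this color: create and cache the material.
        mat = new Rhino.Display.DisplayMaterial(pipe.Value);
        byColor.Add(pipe.Value, mat);
    }
    Display.DrawMeshShaded(pipe.Key, mat);
}
```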
Thanks Steve,
Right after I posted I did get rid of the constructor calls and instead just changed the property; it didn't help much, though. I will keep thinking about it a bit…