Algorithm to frame selected objects

I have a script that generates a lot of loft shapes in a loop and arranges them in a grid. What I noticed is that the display meshes are only generated once the objects come into view. If I start the loop with them out of view and then move the camera to show the whole grid, it takes a long time to generate all the meshes.

What I would like is to automatically move the viewport camera so that the whole grid is at least roughly in view.

There is a great little component that comes with Heteroptera called Camera Crane, which allows you to adjust the camera of a specified viewport.

Now the actual question: given the bounding boxes of all the objects to be created and the view parameters of the current viewport, how do you go about calculating the transform that moves the camera so that all the bounding boxes are in view, i.e. the same thing Rhino’s Zoom Selected does?

The part where you move the camera to point at the center of the bounding box around the shapes is easy, but how do you work out the proper movement along the camera’s Z axis?

It’s hard to find anything online about how that is done. I know it involves converting from a perspective to a parallel projection.
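To make the question more concrete, here is roughly the approach I have in mind, sketched as GHPython-style code. The sine-based dolly distance is just my own guess at the geometry, not necessarily what Zoom Selected actually does, and the inputs `boxes`, `cam_pos` and `half_angle` (vertical half-angle of the lens, in radians) are placeholders:

```python
# Rough sketch, not Rhino's actual algorithm: fit a bounding sphere of the
# union box into the camera's view cone.
# Placeholder inputs: boxes (list of BoundingBox), cam_pos (Point3d),
# half_angle (vertical half-angle of the lens, in radians).
import math
import Rhino.Geometry as rg

def frame_camera(boxes, cam_pos, half_angle):
    # Union all bounding boxes into one
    union = rg.BoundingBox.Empty
    for b in boxes:
        union = rg.BoundingBox.Union(union, b)

    target = union.Center
    # Radius of a sphere guaranteed to contain the union box
    radius = union.Min.DistanceTo(union.Max) * 0.5

    # Keep the current viewing direction, only dolly along the camera's Z axis
    direction = target - cam_pos
    direction.Unitize()

    # Distance at which the bounding sphere just fits inside the view cone
    distance = radius / math.sin(half_angle)

    new_location = target - direction * distance
    return new_location, target
```

The resulting camera location and target could then be fed to Camera Crane, but I’d still like to know how the real thing works.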

Any ideas?

For the example to work you need Heteroptera and Human.


zoom_selected.gh (20.2 KB)

This might work for you:


200326_ZoomSelected_GHPython_00.gh (9.0 KB)
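Something along these lines (a minimal sketch, not necessarily identical to what’s in the attached file; the `boxes` and `zoom` inputs are assumed component inputs):

```python
# Minimal GHPython sketch (not necessarily the attached definition).
# Assumed component inputs: boxes (list of BoundingBox), zoom (bool).
import Rhino
import Rhino.Geometry as rg

if zoom and boxes:
    # Union all incoming bounding boxes
    union = rg.BoundingBox.Empty
    for b in boxes:
        union = rg.BoundingBox.Union(union, b)

    # Zoom the active Rhino viewport to the union box
    view = Rhino.RhinoDoc.ActiveDoc.Views.ActiveView
    view.ActiveViewport.ZoomBoundingBox(union)
    view.Redraw()
```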


That’s great, thanks Anders.

I would still like to know what the actual algorithm behind ZoomBoundingBox is.


https://developer.rhino3d.com/api/RhinoCommon/html/M_Rhino_Display_RhinoViewport_ZoomBoundingBox.htm

Where does it show the algorithm?