Three-Point Perspective Matching with Grasshopper

Although Rhino isn’t primarily designed for photomodeling, I challenged myself to create a Perspective Match definition in Grasshopper. The main obstacle in modeling accurately from a matched image in Rhino is the inability to zoom or pan in a locked viewport, as the camera can’t shift in the X and Y directions. Despite this limitation, I developed a Grasshopper definition that can match an image to a 3D model, similar to software like fSpy.

Overview of the Process

  1. Loading the Image
    First, the image is loaded onto the XY plane in Top View. Using sliders, the image can be positioned outside the workspace.
  2. Setting up an Axis System
    To define the model in 3D, an axis system must be established. Axes are placed on the model; they don’t need to pass through the anchor point, as the definition adjusts this later. The angles between the axes don’t need to be orthogonal if a different setup better fits the model (e.g., for an irregular structure like the Pentagon). An anchor point is placed to mark a reference point shared by both the image and the model.
  3. Initial Camera Positioning
    Position the camera relative to the model to approximate the camera position in the image. This initial match aligns the control lines (used to construct the vanishing triangle) for better positioning.
  4. Constructing the Vanishing Triangle
    Control lines are aligned parallel to each chosen axis and color-coded. In three-point perspective, parallel lines converge at a single point, forming the vanishing triangle. Grasshopper then aligns the model with the image based on this triangle. For cases with camera shift, the definition can create an extended image frame, centering the principal point. This modified image can be saved as a wallpaper, paired with a saved view in Rhino.
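
In code, finding one vertex of the vanishing triangle amounts to intersecting two image-space control lines traced along the same model axis. A minimal sketch in Python (the point values and function name are my own illustration, not part of the definition):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersect the line through p1-p2 with the line through p3-p4.

    Points are 2D (x, y) tuples in image coordinates.
    """
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:
        raise ValueError("control lines are parallel: no finite vanishing point")
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Two control lines traced along the same axis in the photo converge at
# that axis's vanishing point:
vp = line_intersection((0, 0), (4, 1), (0, 2), (4, 2.5))
print(vp)  # (16.0, 4.0)
```

Repeating this for each color-coded pair of control lines yields the three vertices of the vanishing triangle.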


(Image: Pentagon by David Shapinsky, Creative Commons Attribution-Share Alike 2.0 Generic)

Theory and Concept

This approach is inspired by the following video: Coursera: Computing Intrinsics from Vanishing Points.

Using the defined axes, we create an axis system for a tetrahedron (explained in the video), within which the camera is positioned. This tetrahedron shares the orientation of the selected axes in the model. The lengths of the tetrahedron’s axes determine the placement of the vanishing triangle. When axes are orthogonal, Pythagorean calculations suffice; otherwise, it gets more complex. This complexity is discussed in detail in this paper: Three-Point Perspective PDF.
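
The Pythagorean calculations for the orthogonal case can be made concrete. The following is a hedged sketch using standard perspective-geometry identities (not code from the definition): with three mutually perpendicular axes, the principal point is the orthocenter of the vanishing triangle, and the focal length follows from the dot product of two vanishing points measured from that orthocenter.

```python
import math

def orthocenter(a, b, c):
    """Orthocenter of triangle abc (2D tuples), via two altitude equations."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    # Altitude through a is perpendicular to bc; altitude through b, to ac.
    # Solve (p - a) . (c - b) = 0 and (p - b) . (c - a) = 0 for p = (x, y).
    d1x, d1y = cx - bx, cy - by
    d2x, d2y = cx - ax, cy - ay
    r1 = d1x * ax + d1y * ay
    r2 = d2x * bx + d2y * by
    det = d1x * d2y - d1y * d2x
    return ((r1 * d2y - r2 * d1y) / det, (d1x * r2 - d2x * r1) / det)

def focal_length(v1, v2, p):
    """For orthogonal axes: f^2 = -(v1 - p) . (v2 - p)."""
    return math.sqrt(-((v1[0] - p[0]) * (v2[0] - p[0]) +
                       (v1[1] - p[1]) * (v2[1] - p[1])))

# A triangle constructed so the answer is known in advance:
# principal point (0, 0), focal length 2.
v1, v2, v3 = (0.0, 4.0), (math.sqrt(5), -1.0), (-math.sqrt(5), -1.0)
p = orthocenter(v1, v2, v3)
f = focal_length(v1, v2, p)
```

For non-orthogonal axes these shortcuts no longer hold, which is exactly the complexity the paper above works through and the Grasshopper solver sidesteps geometrically.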

My Grasshopper solver handles this complexity without advanced mathematics, leveraging Rhino’s capabilities.

Solver Functionality

The input to the solver is the vanishing triangle, constructed with control lines, and the angles between the model’s three main axes. Each side of the vanishing triangle corresponds to an angle in the axis system, which I treat as an “Inscribed Angle”. By revolving each side 180 degrees, we get half an “Apple Surface”, a hemisphere, or half a “Lemon Surface”. These three surfaces intersect to determine the possible focal points.

From this focal point, a line is constructed perpendicular to the plane of the vanishing triangle; its intersection with that plane is the “Principal Point.” The focal point marks the camera’s location, aimed at the principal point. At this stage, the vanishing triangle and image are transformed into the tetrahedron with the camera, aligning the model and image anchor points with the camera location, thus completing the match.
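
The two steps above can be checked numerically for the orthogonal case. By Thales’ theorem, a 90-degree inscribed angle turns each revolved surface into a sphere with one triangle side as its diameter, and the camera sits where the three spheres meet: directly above the principal point, at a height equal to the focal length. A small self-contained check (the example values are my own construction, not from the definition):

```python
import math

def subtends_right_angle(cam, a, b, tol=1e-9):
    """True if segment a-b subtends a 90-degree angle at 3D point cam."""
    u = tuple(ai - ci for ai, ci in zip(a, cam))
    v = tuple(bi - ci for bi, ci in zip(b, cam))
    return abs(sum(ui * vi for ui, vi in zip(u, v))) < tol

# Orthogonal vanishing points in the image plane (z = 0), chosen so the
# principal point is the origin and the focal length is 2:
v1 = (0.0, 4.0, 0.0)
v2 = (math.sqrt(5), -1.0, 0.0)
v3 = (-math.sqrt(5), -1.0, 0.0)
cam = (0.0, 0.0, 2.0)  # principal point + focal length along the plane normal

# The camera lies on all three Thales spheres: every side of the
# vanishing triangle subtends a right angle there.
ok = all(subtends_right_angle(cam, p, q)
         for p, q in [(v1, v2), (v1, v3), (v2, v3)])
```

This is the property the intersecting apple/lemon/sphere surfaces enforce; for non-orthogonal axes the subtended angles simply match the chosen axis angles instead of 90 degrees.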

Alternative Definition

I created two versions: one with three sets of control lines and one with two sets of control lines plus an adjustable principal point. The second setup reduces inaccuracies by keeping the principal point centered in images directly from the camera, which is ideal for minimizing alignment errors. This alternative includes an additional solver and runs two separate solutions, which could be optimized, though I haven’t had time to refine this yet.

Components and Known Issues

This definition uses the Gumball component for interactivity. However, for each new match, the component must be deleted, a new instance created, and then reconnected to the definition to ensure it functions correctly. If anyone has a more robust alternative, I’d be grateful to hear about it, as Gumball’s interactivity is crucial in this setup, and using only sliders isn’t practical.

Additionally, the View Image component from Fennec is occasionally unstable, causing rotations or reflections in the image. This can be resolved by editing the image in a program like Photoshop or GIMP, though it is inconvenient. I’m not aware of any stable alternatives.

Final Remarks

While these are complex definitions, keep in mind that I am a beginner with about six months of experience in Rhino and Grasshopper. If something is not fully up to standard, it’s likely due to my inexperience. This project is a personal hobby, and I hope sharing it can be of use to others. I welcome questions and suggestions. I also have ideas for two-point and one-point perspective matching, which would use a completely different approach and require more development time.

Required Plugins:

  • froGH
  • Fennec
  • Javid
  • Bitmap
  • Gumball

For clarity, the definition’s input is placed on the left side of the layout. Please follow the steps from top to bottom.

Attachments: Two Grasshopper definitions are included.
Three Point Perspective Match 3ctrl Ed_2001.gh (589.0 KB)
Three Point Perspective Match 2ctrl_principal_point Ed_2001.gh (581.4 KB)


Hi
Can you show an example of this problem?

In my first post I included a definition that I want to replace with a new one, because there is something wrong with it, but I can’t edit my first post.

It’s the definition with three pairs of control lines. I had a Boolean (for the perspective camera) set to true that must be false when starting this definition:

Again:
Three Point Perspective Match 3ctrl Ed_2001.gh (586.2 KB)

Yes:

I cropped this image:

perspective matched:

So I edited the image with GIMP:

and send that to camera:

The results of ‘View Image’ are really weird sometimes. But without this component, this definition was not possible.

With this crop I wanted to test a big camera shift, and that is handled well; that’s why the image is off-center. But the principal point is in the center of the viewport, which is how it should be.

The image loaded as-is; in which part of the script do you get this wrong orientation?

The script is good. It is the component that sometimes does not handle an image well when the image is at a certain angle (in 3D).

My second definition was also posted with the Boolean for the perspective camera set to true, so here is a new upload of that one as well:

Three Point Perspective Match 2ctrl_principal_point Ed_2001.gh (579.7 KB)

You changed the input plane and the image looks rotated?

Or in your script you orient the imported image to a new boundary?

Please look at my first image in the first post; you can see in the Front and Right viewports that the image is placed in front of the camera.

I checked the other view image component.
You have an issue with the input plane when using Orient; please check and fix it.
The View Image component is just a tool to display the image based on the input plane and has no effect on the result; even if you remove it from the script, you get the same wrong image orientation.

The Orient component does what it is supposed to do; the problem with View Image is not in my script but in the component itself. Look at the example I gave and see that there is a very strange rotation within the image itself.

The orient component puts the plane exactly where it should be.

Just remove it from the script to understand better.
The problem is with the orient process; you need to check the input plane and fix the X and Y directions.

Here is a video of the matching process with three pairs of control lines:
