Projecting points and calculating distances between them on a sphere surface


Dear colleagues,

I have a research project; maybe you have ideas or knowledge that could help me. It is about calculating distances on a sphere's surface, similar to calculating navigation routes for planes on a spherical Earth map (great-circle lines, or orthodromes).

I have a sphere with a defined radius. In front of the sphere sits another body, let's say a cube. The upper-left and lower-right corners of the cube's closest face are projected onto the sphere's surface (point A and point B). By rotating the sphere, I want to move A into B's position. How do I measure the distance between A and B on the spherical surface, and how do I calculate the rotation angle of the sphere needed to do that? After that I will have more points on the sphere's surface, and I will need some spherical trigonometry to calculate the surfaces (I need to define some angles between the sphere's center and the points on the surface).
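If you want to compute this outside of Rhino's commands, the distance between two surface points follows directly from the angle they subtend at the sphere's center: arc length = radius × central angle. A minimal sketch in plain Python (the coordinates in the usage line are my own illustration, not your model's data):

```python
import math

def central_angle(a, b, center):
    """Angle (radians) at the sphere's center between surface points a and b."""
    va = [a[i] - center[i] for i in range(3)]
    vb = [b[i] - center[i] for i in range(3)]
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    cos_t = sum(va[i] * vb[i] for i in range(3)) / (na * nb)
    return math.acos(max(-1.0, min(1.0, cos_t)))  # clamp against rounding

def great_circle_distance(a, b, center, radius):
    """Arc length of the shortest path on the sphere between a and b."""
    return radius * central_angle(a, b, center)
```

For example, two points a quarter turn apart on a unit sphere centered at the origin give a central angle of π/2 and a distance of π/2 × radius. The central angle is also exactly the rotation angle the sphere needs in order to carry A onto B.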

Any ideas how to do that?

The project is about simulating the human shoulder joint (more exactly, the pathological shoulder that luxates) using a ball-and-socket model, with the points on the sphere as anatomical landmarks and the cube as the human glenoid. :slight_smile: The sphere fits the articular surface of the humeral head perfectly. The center of the sphere is always in line with the center of the cube. Somewhere on the sphere's surface there is a bony defect, like a canyon. When this defect comes into contact with the anterior rim of the glenoid (the margin of the cube in my example), it makes the humeral head (sphere) slip anteriorly (the center of the sphere is no longer in line with the center of the cube).

I want to see how some anatomical landmarks (points on the sphere) come into contact with the margin of the socket.

To make the problem more difficult, the socket will not be a cube in the future, but a slightly concave surface with an elliptical shape, as the human glenoid is.

To begin, I need to learn how to draw another body's projected points and lines on the sphere's surface, and then how to draw and measure the angles these points make with the sphere's center. That would help a lot!

Here are some other pictures of the anatomical model:



(David Cockey) #2

Project and Pull commands, described in Rhino help.

(David Cockey) #3

Create lines from the sphere's center to each point. Then use Angle with the TwoObjects option to measure the angle between the lines.

[quote=“Andrei_Popescu, post:1, topic:44080”]
How do I measure the distance between A and B on spherical surface
[/quote]

For the sphere, use ShortPath to create the curve on the surface with the shortest path between the points, then Length for the length of the curve.

How will you define the distance between points on a non-spherical surface? The shortest distance along the surface? Or the distance along a plane which passes through the points and the centroid of the surface? For non-spherical surfaces these are usually not the same. (For the latter, use Plane with the 3Point option, ExtendSrf the plane, and Intersect the plane and the surface to get the planar curve.) Or some other way of defining the distance?
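For the scripted route, the rotation that carries A onto B (about the sphere's center) can also be computed directly: its axis is perpendicular to both center-to-point vectors (their cross product), and its angle is the central angle between them. A hedged sketch in plain Python, assuming the sphere's center and the two surface points are already known:

```python
import math

def _sub(u, v):
    return [u[i] - v[i] for i in range(3)]

def _cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def _norm(u):
    return math.sqrt(sum(x * x for x in u))

def rotation_to_align(a, b, center):
    """Unit axis and angle (radians) of the sphere rotation carrying
    surface point a onto surface point b."""
    va, vb = _sub(a, center), _sub(b, center)
    axis = _cross(va, vb)
    n = _norm(axis)
    if n == 0.0:
        raise ValueError("points coincident or antipodal; axis not unique")
    axis = [x / n for x in axis]
    cos_t = sum(va[i] * vb[i] for i in range(3)) / (_norm(va) * _norm(vb))
    angle = math.acos(max(-1.0, min(1.0, cos_t)))
    return axis, angle
```

The returned axis and angle can be fed straight into a Rotate3D-style transform; note the degenerate case where A and B are opposite poles, where infinitely many axes work.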



I’m not sure if I understood your question right, but would something like this be what you are looking for? See the yellow “ray” pointing while the joint is moving (it could of course point in any direction to start with).

See the clip below and watch the “triangle” moving over the joint surface (which is NOT perfectly spherical/circular; this surface was round and then flattened somewhat, to look more like a natural joint). It could have any shape (like your mesh, no problem). The three-point object then “floats” on that surface by means of three individual points that stay on the surface (on any topology, really). And the ray is attached to the group of three so as to point in the normal direction (from the triangle’s triangulation).

The “ray” in turn could be intersected with any other object or surface at any point in time, and that would result in a point in space (the intersection between the two). No manual calculation is needed for this example.

// Rolf


Hello guys, thanks for helping!

I want to project the socket of the joint onto the surface of the sphere and then simulate how the shoulder dislocates, like in this 30-second YouTube video:, in order to understand where the patient’s problem is. It could be the bone loss in front of the socket (called a Bankart lesion), or the big bony hole in the humeral head (called a Hill-Sachs lesion), or both. There is a complicated surgery for this: one fixes a bone block in front of the socket (glenoid) in order to increase the socket surface, thus interrupting the luxation mechanism. But sometimes the defect on the sphere is so big that luxation occurs even after this type of surgery.

Once I can reproduce this in 3D, I will import the patient’s computed tomography into Rhino as STL files, then calculate the dimensions of the articular side of the humerus (part of a sphere, with an exact calculation of the radius), the location and size of the bony defects, and the dimensions of the glenoid (the socket). The two joint components differ from patient to patient; there are always different numbers/results.

Having the numbers before surgery, I could better decide what I need to do: increase the socket surface by x mm and that’s it, or increase the socket size and also fill the hole in the sphere. Two different surgeries with many possible complications, and the poor patient has only one chance before losing important shoulder functionality.

So, yes, I am a surgeon trying to improve my diagnostic/treatment-indication algorithm with 3D modelling :slight_smile:. It’s a pity they didn’t teach us 3D stuff in medical school.



Hi @Andrei_Popescu

Is this what you mean by “increase the socket surface”? (The red part, or does “socket” refer to the contact surface of the “ball joint” on the arm?)

Fig 1.

Regarding the “bony hole in the humeral head”, do you mean the red area (“B”) in Fig. 2?

Fig 2.

In any case, it should be no problem to determine the curvature of any of the surfaces given a mesh of the real thing.

There are experts here on the forum who can tell what workflow would be the best in converting the mesh to mathematically exact NURBS surfaces (the surfaces pictured).

Given such a reconstructed sphere, you can easily determine the radius by Osnapping a point to the (sphere) center of that surface. Then you would have the radius one click away.

Perhaps mesh tools can even determine the center directly from the mesh, and thus the radius? (I’m only a Rhino user with NURBS surfaces.)
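If you end up working directly from the CT mesh, a standard algebraic least-squares sphere fit recovers the center and radius from sampled vertex coordinates, even when they only cover a partial cap such as the articular surface. A sketch in plain Python; in practice you would feed in mesh vertices rather than my synthetic sample points:

```python
import math

def _solve4(M, b):
    """Gaussian elimination with partial pivoting for a 4x4 linear system."""
    n = 4
    A = [M[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def fit_sphere(points):
    """Algebraic least-squares sphere fit.
    Each point satisfies x^2+y^2+z^2 = 2cx*x + 2cy*y + 2cz*z + (r^2 - |c|^2),
    which is linear in u = (2cx, 2cy, 2cz, r^2 - |c|^2).
    Returns (center, radius)."""
    M = [[0.0] * 4 for _ in range(4)]     # normal equations M u = rhs
    rhs = [0.0] * 4
    for (x, y, z) in points:
        row = [x, y, z, 1.0]
        b = x * x + y * y + z * z
        for i in range(4):
            rhs[i] += row[i] * b
            for j in range(4):
                M[i][j] += row[i] * row[j]
    u = _solve4(M, rhs)
    center = [u[0] / 2.0, u[1] / 2.0, u[2] / 2.0]
    radius = math.sqrt(u[3] + sum(c * c for c in center))
    return center, radius
```

Fed with vertices lying on (or near) a spherical patch, this returns the best-fit center and radius in one shot; no full-sphere coverage is required.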

You may also want to figure out how much to extend parts of the edges of that spherical surface (to the sides), in specific directions. (As you notice, I only speak CAD lingo here, not fluent in bio-lingo… :slight_smile: )

Sometimes a simple hand sketch (scan it from paper or take a photo with your mobile and upload it) can help people better understand what you want.

In any case, hang on until you get your answers, people here are amazing in finding solutions.

// Rolf


It’s incredible what you can do with Rhino. Thanks for enlightening me!

I will try to briefly explain the surgery and the anatomy so that you better understand what I am looking for.

Fig 1 and Fig 2 show the socket of the human shoulder. It is slightly concave and small in comparison with the humerus.

Fig 1

Fig 2

This patient had a traumatic shoulder dislocation. Fig 3 (a transversal section) explains what happens. The humeral head suffers a defect and the socket loses its anterior margin. That’s why the shoulder keeps dislocating.


The surgery I do consists of fixing a bone block in front of the socket so that the defect on the sphere never comes in front of the socket, like in the YouTube video. Fig 4 shows the postoperative socket, which has a greater surface.

Fig 4

The humeral head, seen as a sphere (Fig 5), is centered with the center of the socket and sits in front of it, kept in place by muscles, tendons, and the joint capsule. The humeral head rotates on a vertical axis from -40 to around +90 degrees and tilts its axis by a maximum of 90 degrees: Fig 7, Fig 6.

Fig 5

Fig 6

Fig 7

Fig 8: the bony defect on the posterior superior side of the humerus (sphere)

I hope the details of the problem are clearer now.

  1. The socket creates a track, or path, over the articular side of the humerus (the spherical part) with every movement of the arm.
  2. The ball center is aligned with the socket center.
  3. The ball rotates in every direction, keeping the centers aligned but limited by nature to the natural range of motion (external rotation 90 degrees, internal rotation to about -40, and axis inclination from 0 to 90 degrees).
  4. The sphere rotations are combined, for example external rotation plus axis inclination.
  5. When the movement goes to its maximum (external rotation close to 90° and inclination over 60°), the bony defect on the superior-posterior part of the sphere comes into contact with the anterior rim of the socket, and the shoulder dislocates again. There are patients with recurrent dislocations, a bad condition that is difficult to treat.
  6. We do the surgery and increase the surface of the socket, like in Fig. 4, so that the humeral bony defect no longer comes into contact with the anterior socket margin.
  7. The maximum possible size of the bone block is somewhere between 1/4 and 1/3 of the socket size.
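Once the defect location and the rim are measured in the model, the engagement position can be found numerically by sweeping the physiologic range of motion and testing when the rotated defect point crosses the plane of the anterior rim. A hedged sketch in plain Python; the rotation order, the axis assignments, and the rim plane here are simplifying assumptions of mine for illustration, not an anatomical convention:

```python
import math

def rot_z(p, deg):
    """Rotate point p about the vertical axis (standing in for external rotation)."""
    t = math.radians(deg)
    x, y, z = p
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t),
            z)

def rot_x(p, deg):
    """Rotate point p about a horizontal axis (standing in for axis inclination)."""
    t = math.radians(deg)
    x, y, z = p
    return (x,
            y * math.cos(t) - z * math.sin(t),
            y * math.sin(t) + z * math.cos(t))

def defect_engages(defect_pt, external_deg, inclination_deg, rim_normal, rim_offset):
    """True if the rotated defect point passes the (hypothetical) anterior-rim
    plane, i.e. dot(p, rim_normal) > rim_offset."""
    p = rot_x(rot_z(defect_pt, external_deg), inclination_deg)
    return sum(p[i] * rim_normal[i] for i in range(3)) > rim_offset

def first_engagement(defect_pt, rim_normal, rim_offset):
    """Sweep the natural range of motion in 5-degree steps; return the first
    (external rotation, inclination) pair at which the defect engages, or None."""
    for er in range(-40, 91, 5):
        for inc in range(0, 91, 5):
            if defect_engages(defect_pt, er, inc, rim_normal, rim_offset):
                return er, inc
    return None
```

Enlarging the socket with a bone block corresponds here to moving the rim plane (raising `rim_offset` or changing `rim_normal`), so the same sweep would show whether the defect still engages anywhere in the range after the simulated surgery.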

Research questions: at what arm position does the shoulder dislocate (the bony defect engages the anterior rim of the socket at external rotation x°, inclination y°)? How big should the bone block in front of the socket be in order to avoid re-dislocation? If the bone block needs to be greater than 1/3 of the anterior-posterior socket diameter, would it be better to additionally fill the bony defect on the sphere?

If we develop a workflow for doing the calculations and answering the research questions, I could use it on several hundred CT scans in order to clinically validate it.

I asked you guys because no surgeon has managed to answer these questions, but the truth is right in front of our noses.

We won’t get a Nobel, but we will manage to solve one of the biggest troubles in shoulder surgery today.



Thank you for the above description. Your goal as described is absolutely possible to achieve, starting from your scanned meshes. No problem at all as far as I can see. (I’m staying away from the mesh part for now, but others on the forum will fill in and explain that workflow.)

I would suggest that you graphically define the geometrical “hinges” (with approximate position and direction) for each of the two parts here: one set for the ball joint (humerus sphere), and one set for the shoulder’s socket.

DEFINE THIS (graphically):

§1. Shoulder Socket:

  • Radius.
    Perhaps only a point, which represents the (sphere) center of the curvature of the Socket Surface. This point is then to be exactly determined by the CAD software, together with the actual radius value. We can call this parameter SocRcc (anatomic plus geometric hints in the name).
  • Direction Z : Place a point on the upper edge of the shoulder socket to define the point you would prefer to view as “12 o’clock”, i.e. the Z-axis direction. (That point should be aligned with SocRcc in the X-axis direction.) Parameter name: SocZ

§2. Ball (Humerus sphere) :

  • Sphere Center : The radius distance from the ball’s contact surface is directly derived from §1 above (SphRcc = SocRcc). Also define a set of rules for manually positioning the rotation center point, by defining locations on the bone that can be easily accessed during the operation. That probably takes three points, Sph1, Sph2, Sph3 (given that the rotation center is exactly the same for both X and Y rotations).

  • Direction Z : This will be the reference point for specifying where the “groove” is located (specify reference points on the bone). SphZ

  • Defect Rotation Angle X° : As you described above: the external rotation angle at which the defect hits the socket rim. Start and end values, that is, when the defect is “entered” and where the joint gets “locked”, whether that gives a 1/3 or 1/4 size for the added piece of bone. DefXs, DefXe (in degrees).

  • Defect Rotation Angle Y° : The corresponding rotation angle about Y at which the defect hits the socket rim. Start and end values, and perhaps also a “mid” value: these would define at which angle the defect is “entered” (Start), when the joint gets “locked” (Mid?), and End, like Start but entering the defect from the opposite direction. Parameters: DefYs, DefYm, DefYe (in degrees).

The location of these parameters should be illustrated to check if they are sufficient. After that the CAD-workflow for calculating the values could be worked out.

I imagine that these parameters should then be placed in a template CAD model, into which the actual mesh (or the converted NURBS surfaces) would be imported. The manual workflow would be something like:

  1. Open the predefined CAD template.
  2. Import the mesh (or the NURBS-converted mesh).
  3. Place the template parameter points or lines (with the proper names) on the imported surfaces, by snapping them in place using standard Rhino commands (OSnap would probably suffice).
  4. Run an algorithm to perform the calculations and automagic drawing in the model (perhaps placing symbolic model elements in the right places, so that a print of the model would clearly show all the numbers, locations, and directions). All the magic calculations would be done with Grasshopper, or via a script (VB or Python).
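As a starting point for the scripted step, the named parameters above could live in one small data structure that the Grasshopper/Python part fills in from the snapped points and then validates. A minimal sketch in plain Python; the field set simply mirrors the names proposed above, so treat it as a hypothetical template, not a finished schema:

```python
from dataclasses import dataclass
from typing import Tuple

Point = Tuple[float, float, float]

@dataclass
class ShoulderParams:
    SocRcc: float          # socket curvature radius (mm)
    SocZ: Point            # point marking "12 o'clock" on the socket rim
    Sph1: Point            # three accessible bone landmarks locating
    Sph2: Point            #   the rotation center
    Sph3: Point
    SphZ: Point            # reference point locating the "groove"
    DefXs: float = 0.0     # defect engagement, rotation about X: start (deg)
    DefXe: float = 0.0     # ... end (deg)
    DefYs: float = 0.0     # defect engagement, rotation about Y: start (deg)
    DefYm: float = 0.0     # ... mid (deg)
    DefYe: float = 0.0     # ... end (deg)

    @property
    def SphRcc(self) -> float:
        # Per the rule above: the ball's radius equals the socket's curvature radius.
        return self.SocRcc
```

A script could then populate one instance per patient from the snapped template points and hand it to the calculation step, which keeps the per-patient numbers in one place for the later CT-series study.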

// Rolf