What algorithm does Kangaroo use within its solver to create the converged shapes?

Is there a manual with the workings and proof behind the script?


Only Daniel can answer that … but K is a particle-based physics engine, meaning that you can Google and enjoy.

Kangaroo works by minimising total energy.

The various *Goals* define different energies which are zero under certain geometric conditions.

The Kangaroo solver iteratively moves the points so that the sum of the energies acting on all the points in the system is as low as possible.

For example, the **Length** goal acts as a spring, following Hooke’s law, so the energy is zero when it is at its rest length, but increases if it is stretched or compressed (in fact the energy increases as the square of the distance by which it is deformed).
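As a minimal sketch of that quadratic energy (a hypothetical `spring_energy` helper for illustration, not Kangaroo's actual code):

```
import numpy as np

def spring_energy(p_a, p_b, rest_length, stiffness):
    """Elastic energy of a spring between p_a and p_b (Hooke's law).

    Zero at the rest length; grows with the square of the extension.
    """
    extension = np.linalg.norm(p_b - p_a) - rest_length
    return 0.5 * stiffness * extension**2

a = np.array([0.0, 0.0, 0.0])
b = np.array([2.0, 0.0, 0.0])
print(spring_energy(a, b, rest_length=2.0, stiffness=10.0))  # → 0.0
print(spring_energy(a, b, rest_length=1.0, stiffness=10.0))  # → 5.0
```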

The other goals define energies based on the geometric relations between the set of points they act on. Some of them (such as Length, Angle, Pressure) are based on physical elastic behaviour in a way that their strength can be related precisely to standard material properties and units.

Other goals are purely geometric in nature - they become zero when some condition such as circle tangency or quad planarity is satisfied, and can be used for finding geometry which satisfies some fabrication constraints.

Other goals are physically based, but not *yet* in a calibrated way, such as the **Hinge** goal for shell bending, so they can model physical behaviour qualitatively, but not in a way where numerical material properties can be so directly applied (though hopefully in a future version this will change).

When the goals do not conflict (for example starting from an arbitrary quadrilateral and making the angles all 90° and the lengths equal), this can work as a geometric constraint solver by making all the energies zero.

However, for something like finding the deflection of a hanging cable, the length and load goals will not all be satisfied, since one resists the other. Here the solver finds the configuration with the least total potential energy, and if the strengths are set correctly this can be a numerically accurate model of elastic deformation.

Sometimes you might not even want to reach a single energy minimum, but to make something dynamic like a flag flapping in the wind.

Typically these have been seen as separate things (constraint solving, modelling structural deformation, dynamic animation), needing their own specific approaches, but in Kangaroo they are all tackled with one energy minimisation approach.

Hence I called them ‘goals’ as a more general term encompassing constraints, applied loads and elastic resistance.

The actual algorithm the Kangaroo solver uses to perform this minimisation can be seen as a form of *dynamic relaxation* (DR). This is a name coined by the engineer Alistair Day, to describe a technique where equilibrium is found by combining all the forces acting on each point, and repeatedly moving all the points by small steps until the forces balance and the movement stops. This happens in a dynamic fashion, using momentum, so the movement will oscillate about the equilibrium, and damping is used to remove energy and ensure convergence.
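As an illustrative sketch of that loop (a toy example with made-up mass, step and damping values, not Kangaroo's actual solver):

```
import numpy as np

def dynamic_relaxation(pos, forces_fn, mass=1.0, dt=0.01,
                       damping=0.95, tol=1e-9, max_steps=200000):
    """Minimal dynamic relaxation: step the position with momentum,
    damp the velocity each step, and stop once the motion dies out."""
    vel = np.zeros_like(pos)
    for _ in range(max_steps):
        acc = forces_fn(pos) / mass        # sum of forces -> acceleration
        vel = damping * (vel + acc * dt)   # damping removes energy
        pos = pos + vel * dt
        if np.linalg.norm(vel) < tol:      # movement has stopped
            break
    return pos

# Example: a particle on a spring to the origin, plus a constant load.
def forces(p):
    return -10.0 * p + np.array([0.0, 0.0, -1.0])

rest = dynamic_relaxation(np.array([1.0, 0.0, 0.0]), forces)
print(rest)  # settles near (0, 0, -0.1), where spring and load balance
```

Note the damping and mass values here are chosen purely for convergence, as in the pseudo-dynamic use of DR described below; they do not affect where the equilibrium ends up, only how fast it is reached.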

The writings of the late Mike Barnes are a good place to read more about DR, as he was a pioneer in developing its applications to form-finding, particularly for tensile structures.

In the typical engineering applications of dynamic relaxation, the desired output is just the static equilibrium configuration, so the damping and mass values are chosen just for stability and convergence rather than based on real physical values (note that this does not affect the accuracy of the equilibrium solution, only the route and number of steps taken to get there). Hence it is sometimes referred to as pseudo-dynamics.

*However* (and this is sometimes a cause of confusion), the typical form of DR is essentially identical to a common approach to time integration used in physics engines for games and animation (a popular reference for which is the set of Siggraph course notes from Baraff and Witkin). If you adjust the mode and values for damping and mass appropriately, the same solver can be used for both DR form-finding, and animations of dynamics.

Since version 2, Kangaroo doesn’t actually use the classical form of DR, but a new form where instead of summing accelerations, it combines projections onto the zero energy state of each goal. This draws on conversations I had with the LGG group at EPFL, during the development of their ‘Projective Dynamics’ paper (though the method used in Kangaroo is not exactly the same as the one described in that paper, and incorporates a different approach to the damping to accelerate convergence). I later also learned that this method of combining projections actually dates back much further than the LGG work, and can be seen as a form of the alternating direction method of multipliers (ADMM).

I’ve had a mostly written paper on the solver method developed for Kangaroo sitting on my desktop for years, one day I’ll actually finish and publish it.

I hope the above helps clarify things a little more in the meantime.


Hi Daniel,

Thanks for the clarification. I’ll have a look into Mark Barnes to read up on Dynamic Relaxation.

Best of luck with the paper, I look forward to reading it.

Kind regards,

Will

*Michael* Barnes.

This one might be a good place to start:

https://researchportal.bath.ac.uk/en/publications/form-finding-and-analysis-of-tension-structures-by-dynamic-relaxa

edit- realised that link isn’t an open access one. Here’s an alternative:

http://openaccess.city.ac.uk/id/eprint/11887/


Hi,

Recently I had the same question, so I implemented dynamic relaxation in Python. It only contains a small subset of the Kangaroo goals (only springs and anchors), but it helps to understand the principles of DR. Other goals can be implemented straightforwardly. The results match Kangaroo's perfectly.

Check the comments within `kangaroo.py` for the mechanical background.

Best

Thomas


That’s nice, but usually the problem with Kangaroo is that there is no easy way to debug custom Kangaroo goals visually to track what’s happening.

Do you have any ideas how that can be implemented?

Hi @ankere

Have you tried with this? (to do individual steps with custom goals and see the contributions for each point)


@ankere For step-by-step debugging in VS: maybe create a small plugin containing the solver which includes the `KangarooSolver.dll`.

@DanielPiker Very cool! I am playing around with custom goals. The force is computed by the derivative of energy functionals. The lumped mass is also straightforward. Is there a recommended way to convert them to `Move` + `Weighting`?

Another helpful thing for developing goals can be to make a little script to directly output the Move vectors per point, without even making a simulation and stepping it. I’ll try and post an example of this.

@oberbichler Thanks for the Python DR example - nice work!

As for converting things written in the classic force-based or energy-gradient approach to goals with Move & Weighting -

In general, Move should be the vectors which take the set of particles to their closest zero energy state, and when the Move multiplied by the Weighting per particle gives the same vector as the force used in the classic acceleration based DR form, it should converge to the same thing.

In the force-based approach, high stiffness means increasing the length of the acceleration vectors, and if the timestep isn’t scaled down accordingly these can overshoot and make things unstable, as would happen in Kangaroo1, particularly when mixing elements of very different stiffness. You can always overcome this by making the timestep small enough, but sometimes that means making it so tiny that things slow to a crawl.

The big advantage of splitting it into a vector Move and a scalar Weighting is that instead of a single vector with importance controlled just by its length, it separates *where the point wants to go* from *how much it wants to go there*. So even as the weighting of one goal goes towards infinity, it doesn’t make the system unstable, it just means that goal will completely override any competing ones.
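As a concrete sketch of the conversion Daniel describes, here is one plausible convention for writing the same spring both ways (hypothetical helper names; not necessarily Kangaroo's exact internals, where Move is half the stretch per endpoint):

```
import numpy as np

def spring_as_force(p_a, p_b, rest_length, stiffness):
    """Classic force-based form: one vector per particle, with the
    goal's importance encoded directly in the vector's length."""
    v = p_b - p_a
    length = np.linalg.norm(v)
    f = stiffness * (length - rest_length) * v / length
    return f, -f  # force on a, force on b

def spring_as_goal(p_a, p_b, rest_length, stiffness):
    """Move & Weighting form: Move takes each endpoint halfway to the
    rest-length configuration (the closest zero-energy state for equal
    weightings); the scalar Weighting carries the stiffness separately."""
    v = p_b - p_a
    length = np.linalg.norm(v)
    stretch = (length - rest_length) * v / length
    moves = (0.5 * stretch, -0.5 * stretch)
    weightings = (2.0 * stiffness, 2.0 * stiffness)
    return moves, weightings

# Move * Weighting per particle reproduces the classic force vector:
a, b = np.array([0.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0])
f_a, f_b = spring_as_force(a, b, rest_length=2.0, stiffness=4.0)
moves, weightings = spring_as_goal(a, b, rest_length=2.0, stiffness=4.0)
print(f_a, moves[0] * weightings[0])  # identical vectors
```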


Thanks! And thanks for the additional explanation!

> Move should be the vectors which take the set of particles to their closest zero energy state, and when the Move multiplied by the Weighting per particle gives the same vector as the force

I usually implement the energy functional and compute the gradient which corresponds to the residual force. The direction of the negative gradient points to the closest energy state.

If `force = move * weight` then I assume that `move = force / weight`. I am just not sure why I divide the force by the weight when it is multiplied by the weight afterwards(?)

> it separates *where the point wants to go* from *how much it wants to go there*

Usually, I use a Newton algorithm for the solution, which requires the second derivative (= stiffness) to determine a step size (btw: there is a nice paper about the relation between Newton and DR). As far as I understand, the weights have a similar role, with the difference that there is only one value per node and therefore no matrices are required(?)

I am still not sure how to compute a reliable weight without modifying the mechanical behavior of the system. When I find some time, I will post an example.

@DanielPiker Sorry for late reply but I had to prepare a defence presentation. Thanks for sharing the debugger script. I have also tried it before but will check it out again.

@oberbichler Thats a nice idea indeed. Thanks!

Hi,

I found some time to prepare a minimal example:

A system of two elastic non-linear truss elements. The location of the left and the right node are fixed. The node in between can move in all directions. With the knob I can adjust the undeformed length = target length of the members. The black geometry is the initial setup, the red geometry the solution of kangaroo, and the gray geometry is the reference solution denoted by the intersection of two circles.
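For reference, that gray geometry (the zero-energy position, where both members reach their target lengths) can be computed directly as a circle-circle intersection. A minimal sketch, assuming supports at (0, 0) and (d, 0):

```
import math

def circle_intersection(d, r1, r2):
    """Upper intersection of circles of radius r1 about (0, 0) and r2
    about (d, 0): the zero-energy position of the free node between
    two trusses with target lengths r1 and r2."""
    x = (d * d + r1 * r1 - r2 * r2) / (2.0 * d)
    y_squared = r1 * r1 - x * x
    if y_squared < 0:
        raise ValueError("target lengths cannot both be reached")
    return x, math.sqrt(y_squared)

# Supports 2 apart, both target lengths sqrt(2): the node sits at (1, 1).
print(circle_intersection(2.0, math.sqrt(2), math.sqrt(2)))  # → (1.0, 1.0)
```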

Seems to work. The custom goal for the nonlinear truss uses the energy functional:

```
# inner energy
pi = (0.5 * self.youngs_modulus * eps + self.prestress) * eps * self.area * self.ref_length / weight
```

My question @DanielPiker: I divide the energy (`pi`) by the weight. Reason: Kangaroo multiplies the move vector with the weight. Therefore, there might be inconsistencies when I add several energy functionals together. Am I wrong? Why does Kangaroo multiply the move vector with the weight if it can be done at element level?

Best

Thomas

kangaroo-custom-goal.gh (30.6 KB)

Hi Thomas,

Sorry for not replying sooner to your last post.

This does need to happen in the solver per point (acted on by multiple goals) to get the stability benefits.

For each point, we have, per goal acting on it, a scalar weighting and a move vector.

The solver sums up the move * weighting for all of these goals to get a single vector; it also separately sums up all the weightings to get a single scalar WeightSum.

*It then divides the (move * weighting) sum vector by this WeightSum.*

This gives a new combined target position for this point which is guaranteed to always lie inside the convex hull of the per-goal targets for that point (since effectively it makes the weights sum to 1). This is what makes it so much more stable than the force/acceleration based approach.

This weighted average minimises the sum of the weighted squares of the distances to the different per-goal target points, which is locally the energy minimisation we want. Iterating this we can also minimise this total for the system globally - i.e. reach equilibrium.

Within the Calculate method of a goal, it doesn’t know what other goals are going to be acting on the same points, so the division step has to happen at the solver level.

(it doesn’t necessarily have to be done every iteration - if you only ever wanted weightings to stay fixed throughout a simulation you could cache it, but as it’s not a costly operation I chose to do it each iteration which allows the option of goals which dynamically adjust their weighting, such as ConstantTension)
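The combination step described above can be sketched like this (plain numpy arrays for illustration, not Kangaroo's actual types):

```
import numpy as np

def combine_targets(current, moves, weights):
    """Combine per-goal move vectors for one point into a single update.

    Each goal proposes a target (current + move) with a scalar weighting.
    Taking the weighted average of those targets keeps the result inside
    the convex hull of the per-goal targets, so even an enormous weighting
    cannot make the point overshoot -- it just dominates the average.
    """
    moves = np.asarray(moves, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weighted_sum = (moves * weights[:, None]).sum(axis=0)
    return current + weighted_sum / weights.sum()

p = np.array([0.0, 0.0, 0.0])
# Two equal goals pulling toward (1,0,0) and (0,1,0): the midpoint wins.
print(combine_targets(p, [[1, 0, 0], [0, 1, 0]], [1.0, 1.0]))
# Make the first goal a million times stronger: the point lands almost
# exactly on that goal's target, but never past it.
print(combine_targets(p, [[1, 0, 0], [0, 1, 0]], [1e6, 1.0]))
```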


Hi,

don’t worry. Thanks for your answer! Let me clarify: The division by the mass-sum (= weight-sum) is clear to me. It corresponds to the inverse mass matrix.

I wonder why Kangaroo multiplies the move vector by the mass. I would expect the following:

```
for goal in goals:
    goal.compute()
    for j, particle in enumerate(goal.particles):
        particle.force += goal.force[j]
        particle.mass += goal.lumped_mass[j]
```

But Kangaroo computes:

```
for goal in goals:
    goal.compute()
    for j, particle in enumerate(goal.particles):
        particle.force += goal.force[j] * goal.lumped_mass[j]  # << !!
        particle.mass += goal.lumped_mass[j]
```

Both variants are equivalent. In Kangaroo I only have to divide the residual force of the Goal by the mass.

In fact, this is what the spring element does. This

```
def compute(self):
    particle_a, particle_b = self.particles
    v = particle_b.position - particle_a.position
    rest_length = self.rest_length
    actual_length = norm(v)
    delta = actual_length - rest_length
    direction = v / actual_length
    self.force[0] = delta * self.stiffness * direction
    self.force[1] = -self.force[0]
    self.lumped_mass = np.array([self.stiffness, self.stiffness])
```

in Kangaroo becomes

```
def compute(self):
    ...
    self.force[0] = delta * direction
    self.force[1] = -self.force[0]
    self.lumped_mass = np.array([self.stiffness, self.stiffness])
```

and produces the same result because Kangaroo multiplies the force (or move-vector) with the mass within the solver. Therefore, in Python I get the same results in each iteration.

It gets a little tricky when I don’t want to use stiffness as the mass, to better control convergence without changing the mechanical behavior (force). In the example (last post), I simply divide the force by the selected mass within the Goal. The classical implementation would save the division within the Goal plus the subsequent multiplication within the solver. Am I missing something?

That’s not what Kangaroo does. The Weighting is not the lumped mass.

Your Python code isn’t the same as the method used in Kangaroo2 - it’s more like the solver in Kangaroo1. They should all converge to the same thing (provided you use appropriately small timesteps in the force-based versions), since you can use different methods to find the same minimum, but if you try making one spring a million times stiffer than the others the difference becomes apparent.

I’ll try and come up with a code example to make it clearer.

@DanielPiker Sorry and thanks for the clarification! I was comparing the results using the following setup in Rhino 7:

Which solver/goals should I use to understand the new solver?

Edit: I just double-checked the results for this example, and they match exactly with Kangaroo (final and intermediate results after 10, 20, … iterations). Also, when making one spring a million times stiffer. Might I trigger a corner case?

Hey Daniel,

I have two questions, please:

1- In the early stages of Kangaroo, you published an article “Kangaroo: Form Finding with Computational Physics” saying that “Simulation in Kangaroo is nonlinear…”. What did you mean by that? What kind of non-linearity are you referring to?

2- Is it safe to say that Kangaroo’s algorithm is DR? I know here you talked about energy minimization, but during form-finding, does it behave like DR? What I mean is: the user sets a mesh with edges and nodes. Does Kangaroo then assume masses and damping and find equilibrium with the given loads and edge lengths? Does Kangaroo use the same equations described in Barnes’ 1977 dissertation?

Appreciate your comments.

Hazhi