Ah, I think I spotted the issue! Sorry it took me a while.
In the SHinge goal, at line 38 you have:
PIndex = new int[4];
If you remove this line, I think it will work and allow live updates of the input parameters.
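Just to show the pattern in isolation, here's a minimal two-particle goal (a plain spring standing in for your hinge, so the Calculate body is not your maths). It's only a sketch, assuming the usual GoalObject members (PPos, PIndex, Move, Weighting, Calculate); the key point is that the constructor fills PPos and never touches PIndex:

```csharp
using System.Collections.Generic;
using Rhino.Geometry;

// Minimal sketch of a custom goal: fill PPos in the constructor and leave
// PIndex alone, so the solver can assign the indexing itself.
public class SimpleSpring : KangarooSolver.GoalObject
{
    private double restLength;

    public SimpleSpring(Point3d start, Point3d end, double rest, double strength)
    {
        PPos = new Point3d[] { start, end };   // the positions this goal acts on
        // note: no "PIndex = new int[2];" here. It stays null until the solver fills it
        Move = new Vector3d[2];
        Weighting = new double[] { strength, strength };
        restLength = rest;
    }

    public override void Calculate(List<KangarooSolver.Particle> p)
    {
        // by the time Calculate is called, the solver has filled PIndex
        Point3d a = p[PIndex[0]].Position;
        Point3d b = p[PIndex[1]].Position;
        Vector3d ab = b - a;
        double stretch = ab.Length - restLength;
        ab.Unitize();
        Move[0] = 0.5 * stretch * ab;          // pull the ends towards the rest length
        Move[1] = -0.5 * stretch * ab;
    }
}
```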
To explain a bit about PIndex:
When the solver runs, it builds a list of particles without duplicates, and the PIndex array of each goal holds the integer indices of the particles that goal acts on.
Normally when a goal is input into the solver from a goal component, this PIndex array is null.
For each goal in turn, the solver creates a new PIndex array, then goes through the goal’s PPos array of positions: for each position it searches for a particle at that location (adding a new particle if one doesn’t already exist there) and assigns that particle’s index to the corresponding item of the goal’s PIndex array.
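In sketch form, that indexing pass does something like the following. This is just an illustration, not the actual Kangaroo source; the particle list is modelled here as nothing more than a list of positions:

```csharp
using System.Collections.Generic;
using Rhino.Geometry;

// Illustration of the indexing step: turn a goal's PPos array into a PIndex
// array against a running particle list, adding particles where needed.
public static class IndexingSketch
{
    public static int[] AssignIndices(Point3d[] PPos, List<Point3d> particlePositions, double tolerance)
    {
        var PIndex = new int[PPos.Length];
        for (int i = 0; i < PPos.Length; i++)
        {
            // look for an existing particle at this position (within tolerance)
            int found = particlePositions.FindIndex(q => q.DistanceTo(PPos[i]) < tolerance);
            if (found < 0)
            {
                // no particle there yet, so add one
                particlePositions.Add(PPos[i]);
                found = particlePositions.Count - 1;
            }
            PIndex[i] = found;  // the goal will act on this particle
        }
        return PIndex;
    }
}
```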
However… some goals also have the option to assign the PIndex values directly, before they go into the solver (this isn’t used much, but is potentially useful if you have something like a very large mesh where you know the indexing in advance and will be updating the parameters frequently during iteration, so want to avoid the speed cost of the searching and indexing in the solver).
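If you ever did want to use that option, it would just mean filling PIndex yourself instead of PPos. Purely as a hypothetical fragment added to the SimpleSpring sketch above (it assumes the particles at those indices already exist in the solver):

```csharp
// Hypothetical index-based constructor for the SimpleSpring sketch above:
// if the particle indices are already known (say from a mesh's vertex
// numbering), fill PIndex directly and the solver skips its search step.
// The particles at these indices must already exist in the solver.
public SimpleSpring(int start, int end, double rest, double strength)
{
    PIndex = new int[] { start, end };   // indices assigned up front, not searched for
    Move = new Vector3d[2];
    Weighting = new double[] { strength, strength };
    restLength = rest;
}
```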
(This is all to allow goals to be added and removed even between iterations, which wasn’t possible in the earliest versions of Kangaroo, where a reset was needed whenever the set of forces or particles changed.)
If the PIndex array is not null, the solver assumes it has already been filled deliberately, and doesn’t try to assign it itself. Because your goal had the line initialising the array, it was filled with zeros, so the indexing wasn’t getting applied properly.
Now this is my fault, as I can see that some of the example goals can be misleading by having that PIndex initialiser in them.
The reason is that the components for some of the predefined goals work a bit differently: some actually do store their indexing, so that it doesn’t get reassigned whenever, say, the strength parameter is updated. Making this work when the inputs to the goal are data trees ends up a bit more complicated, because SolveInstance gets called multiple times, so the indexing has to be stored outside it. I’ve never been entirely happy with the solution I came up with for this.
I’ve since found, though, that simply creating the goals anew each time SolveInstance is called is very rarely a problem, and it keeps the component code much simpler. It does mean that whenever you adjust a parameter like the strength or angle, the indexing gets assigned again, but unless you have tens of thousands of points, the time taken for this is negligible.
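For what it’s worth, a goal component written that way can stay very short. Something like this sketch (the names and GUID are placeholders, and it outputs the SimpleSpring goal from the earlier sketch):

```csharp
using System;
using Grasshopper.Kernel;
using Rhino.Geometry;

// Sketch of a goal component that just builds a fresh goal on every
// SolveInstance call and lets the solver redo the indexing.
public class SimpleSpringComponent : GH_Component
{
    public SimpleSpringComponent()
        : base("SimpleSpring", "SSpring", "A minimal spring goal", "Kangaroo2", "Goals") { }

    protected override void RegisterInputParams(GH_Component.GH_InputParamManager pManager)
    {
        pManager.AddLineParameter("Line", "L", "Line between the two particles", GH_ParamAccess.item);
        pManager.AddNumberParameter("RestLength", "R", "Rest length of the spring", GH_ParamAccess.item);
        pManager.AddNumberParameter("Strength", "S", "Strength", GH_ParamAccess.item, 1.0);
    }

    protected override void RegisterOutputParams(GH_Component.GH_OutputParamManager pManager)
    {
        pManager.AddGenericParameter("Goal", "G", "Goal for the Kangaroo solver", GH_ParamAccess.item);
    }

    protected override void SolveInstance(IGH_DataAccess DA)
    {
        Line line = Line.Unset;
        double rest = 0.0;
        double strength = 1.0;
        if (!DA.GetData(0, ref line)) return;
        if (!DA.GetData(1, ref rest)) return;
        DA.GetData(2, ref strength);

        // a brand new goal is made whenever an input changes; its PIndex is
        // left null, so the solver reassigns the indexing (cheap unless the
        // point count is very large)
        var goal = new SimpleSpring(line.From, line.To, rest, strength);
        DA.SetData(0, goal);
    }

    // placeholder GUID: replace with your own unique value
    public override Guid ComponentGuid
    {
        get { return new Guid("00000000-0000-0000-0000-000000000001"); }
    }
}
```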
I hope this makes things a little clearer.
I’ll try and add some more examples not just of goals, but also of actual goal components.
(by the way, I recognised the name from some of your nice papers. The RoboCut work is very cool!)