Limit to goal / point count for Kangaroo Solver

I am working on a simulation in Kangaroo that includes a very large number of goals and points, hundreds of thousands in some cases. I am noticing that as the size increases, the solver gets disproportionately slower to initialise. The simulation itself still runs at an acceptable speed, but past some limit I have not quite pinned down, increasing the goal count by about 1.5 times takes the component's reset time from 2 seconds to 10 seconds. Another 1.5 times increase took it from 10 seconds to tens of minutes, at which point I lost patience and closed Rhino.

I should add that the behaviour is somewhat unintuitive. I have a huge grid of points, and in some cases the solver handles 400,000 goals with no problem, while in other cases 70,000 goals of the same type seem to make it crash. It almost seems that the fineness of the point grid, rather than the goal count itself, is causing the problems. In the example above, going from the 400,000-goal grid to a slightly finer grid causes problems, and the 70,000-goal case was a test where I selected a subset of that already-problematic finer grid.

I am using only simple custom goals (line springs, on-point and on-plane), output as a list. Could this be related to the way the Kangaroo solver checks for coincident points using the tolerance setting? I could not fix the problem by changing it.

I understand this is getting into territory where writing a custom solver could be feasible, but does anyone know what could be causing the problem with the Kangaroo solver, and whether it can be fixed easily? Alternatively, could it be circumvented with a custom script using a Kangaroo PhysicalSystem, doing something like assigning the point indices directly?

Hi @kais.no, did you mean to post a file?
Setting the indexing directly might not be too complex, e.g. if you already have mesh topology.
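
For instance, something along these lines for springs on mesh edges (a minimal sketch, assuming the KangarooSolver.dll Spring goal and that PIndex can be set directly on a goal as on the standard GoalObject base class; constructor signatures may vary between Kangaroo versions):

```csharp
using System.Collections.Generic;
using Rhino.Geometry;
using KangarooSolver;
using KangarooSolver.Goals;

// Build one Spring per mesh topology edge, pre-assigning the particle
// indices straight from the mesh's TopologyVertices so the solver can
// skip its coincident-point search. The particle positions then need to
// be registered with the PhysicalSystem in this same vertex order.
List<IGoal> SpringsFromMeshEdges(Mesh m, double stiffness)
{
    var goals = new List<IGoal>();
    var tv = m.TopologyVertices;
    for (int e = 0; e < m.TopologyEdges.Count; e++)
    {
        var ends = m.TopologyEdges.GetTopologyVertices(e);
        Point3d a = new Point3d(tv[ends.I]);
        Point3d b = new Point3d(tv[ends.J]);
        var spring = new Spring(a, b, a.DistanceTo(b), stiffness);
        spring.PIndex = new int[] { ends.I, ends.J }; // pre-set: no point search needed
        goals.Add(spring);
    }
    return goals;
}
```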

I cannot share it openly, but I have DMed you a test version. Is that OK? Feel free to share any insights about the topic here.

I was about to make an example for setting the particle indexing directly in the goal creation script, but then I spotted something else that is likely slowing your definition down a lot and is an easier fix.

When you change the resolution in your script, the points from the previous resolution still exist in the solver. So when indexing, it compares each new point against all the old points as well as the new ones, and if you change resolution a few times it can end up indexing many millions of points until you press reset (but you cannot press reset until that indexing has finished). That fits the unintuitive behaviour you were seeing with the point counts.
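
Schematically, the indexing does something like this for every goal point (just the gist, not Kangaroo's actual source):

```csharp
using System.Collections.Generic;
using Rhino.Geometry;

// For each goal point, scan every particle already in the system and only
// add a new one if nothing lies within tolerance. Stale particles left
// over from previous resolutions get scanned too.
int FindParticleOrAddNew(List<Point3d> particles, Point3d pt, double tol)
{
    for (int i = 0; i < particles.Count; i++)
        if (particles[i].DistanceTo(pt) < tol)
            return i; // reuse the existing particle's index
    particles.Add(pt);
    return particles.Count - 1; // genuinely new point
}
// With g goal points and p particles this is O(g * p) distance checks,
// so leftover points from earlier resolutions multiply the indexing time.
```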

If you make sure you always disconnect the goals input and reset before changing the resolution, you can avoid this issue. (With smaller simulations this is not usually a problem, since the number of points is often small enough that you can just change the input without disconnecting it and then press reset, but with larger simulations it becomes significant.)


This works when changing between two large goal sets, but the problem remains with even bigger ones. Even doing this, connecting the goals at a resolution of 1 takes about a minute on my computer, and at a resolution of 0.5 it takes too long to bother. What I need lies somewhere in between, and judging from the solver speed at resolution 1 compared to resolution 2, it seems doable, if it were not for the time it takes to connect the component. Could you provide or link an example where the indexing is set directly? And how can it output ordered points without using the Show component? That would also be useful for cases where points start in the same location.

Here’s an example of providing pre-indexed goals to the solver. It does seem to help.
index_demo.gh (11.7 KB)

Another way of approaching it would be to initialise the solver itself in a script, more like this example:
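
Roughly, the moving parts look like this inside a C# scripting component (a minimal sketch from memory of the KangarooSolver.dll API, not the linked example itself; method names like AddParticle, Step, GetvSum and GetPositionArray may differ slightly between versions):

```csharp
using System.Collections.Generic;
using Rhino.Geometry;
using KangarooSolver;

// 'pts' (ordered points) and 'goals' (pre-indexed IGoals whose PIndex
// values refer to positions in 'pts') are assumed to be inputs here.
var ps = new PhysicalSystem();
foreach (Point3d pt in pts)
    ps.AddParticle(pt, 1.0); // added in order, so indices match 'pts'

// Iterate until the system settles or an iteration cap is reached.
int counter = 0;
do
{
    ps.Step(goals, true, 1e-12); // goals, momentum on, stopping tolerance
    counter++;
} while (ps.GetvSum() > 1e-12 && counter < 1000);

// Positions come back in the order the particles were added, so this
// recovers the original point ordering without a Show goal.
Point3d[] outPts = ps.GetPositionArray();
```

Because the output order matches the order the particles were added, this should also cover your case of points that start in the same location: coincident points simply become separate particles with distinct indices, since no coincidence check is ever run.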


Fantastic, thank you!