Scripting N-stage optimizations with Galapagos / other?

Using Rhino5,

Q1: Is it possible to set up and run any of the optimization solvers (like Galapagos) from a C# script?

Q2: Is there a way to read the status of a running optimizer so as to block data downstream until it’s finished (the idea is to start the next solver after the first has finished, etc.)?

I understand that I could combine the 2 (or N) optimization runs into one fitness objective, but since each run can be split into very “efficient” optimization runs (small search spaces and few parameters each), running them separately avoids searching something like N^2 combinations when N + N would suffice (e.g. two stages with 100 candidate values each cost 100 + 100 evaluations in sequence instead of 100 × 100 combined).

But most important is Q1: setting up and running the simulation from code (I have everything else automated, except for opening the right .3dm file + GH definition file).

// Rolf



Q1: No, at least not without using user32.dll to simulate human input.
Q2: Before hacking this: with your level of coding you should be able to write a simple optimiser yourself instead.
Basically it is “only” an iterative loop that tests different parameter setups (genes) for maximum or minimum outcome (fitness).
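
Something along these lines, just a rough sketch of a plain hill-climbing loop (all names are made up, no plug-in API involved):

```csharp
using System;

// Minimal hill-climbing optimizer: mutate one gene at a time and keep
// the change whenever the fitness improves. All names are illustrative.
public static class SimpleOptimizer
{
    public static double[] Maximize(
        Func<double[], double> fitness, // evaluates one parameter setup
        double[] start,                 // initial gene values
        double[] min, double[] max,     // per-gene bounds
        int iterations = 1000,
        double step = 0.1)
    {
        var rnd = new Random();
        var best = (double[])start.Clone();
        double bestFit = fitness(best);

        for (int i = 0; i < iterations; i++)
        {
            // Mutate a single randomly chosen gene within its bounds.
            var trial = (double[])best.Clone();
            int g = rnd.Next(trial.Length);
            trial[g] += (rnd.NextDouble() * 2.0 - 1.0) * step * (max[g] - min[g]);
            trial[g] = Math.Max(min[g], Math.Min(max[g], trial[g]));

            double fit = fitness(trial);
            if (fit > bestFit) { best = trial; bestFit = fit; }
        }
        return best;
    }
}
```

Running your N small optimizations back to back is then just a loop over the stages, each with its own fitness delegate and bounds.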

I believe that @MateuszZwierzycki might have successfully dabbled with this.

Q1: One of my students did it, using reflection (no simulating human input needed), but it’s not easy and a little unstable. Good enough for benchmarking, but probably not good enough for you.

Q2: If you can do Q1, this one’s easy.
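
For Q1, the general reflection pattern would look roughly like the sketch below. There is no documented Galapagos API, so every assembly, type and member name here is a placeholder you would have to dig out of the actual assembly yourself (e.g. with a decompiler):

```csharp
using System;
using System.Reflection;

public static class GalapagosDriver
{
    // All assembly / type / member names below are placeholders, not the real
    // API; the actual ones must be found by inspecting the assembly first.
    public static void RunHypotheticalSolver(string assemblyPath)
    {
        Assembly asm = Assembly.LoadFrom(assemblyPath);
        Type solverType = asm.GetType("Galapagos.SomeSolverType");   // placeholder
        object solver = Activator.CreateInstance(solverType);

        MethodInfo start = solverType.GetMethod("Start",             // placeholder
            BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance);
        start.Invoke(solver, new object[] { /* solver settings */ });
    }
}
```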

Btw, it might well be that N^2 has better solutions than N+N, but that of course depends.

And yes, you might want to code something for yourself.
You could use our open source optimization plug-in as a starting point.

Well, I’m doing that as well. But I also want to quickly set up repetitive tests to find out the best possible results, for example to verify that my own “optimized optimizations” actually give acceptable results within the search space. Once that is confirmed I can rely on my own super optimized search spaces (often that involves point clouds, as in the example below).

So I actually make my components with two alternate sets of inputs - for example individual inputs for manipulating axis positions (simX, simY, simZ and so on) with GAs (see Goat below), and also an alternative point cloud input for testing within that limited predefined search space (based on the insights resulting from tests with GAs manipulating the individual axis inputs).

All this prior test work would be much simpler and less time consuming if I could automate those optimization components with settings and start/stop, and also read their status so as to shunt (DataDam) the output data and keep downstream components (huge component networks) from processing while a simulation is in progress.
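
The gating part could be as simple as something like this in a C# script component (just the hold-back logic with made-up input names; the real trick is still getting a reliable “running” status out of the solver):

```csharp
// Inside a Grasshopper C# script component acting as a crude data gate.
// Inputs:  running (bool)   - true while the upstream solver is busy
//          data    (object) - the result to pass along once the solver is done
// Output:  A

private object _held;   // instance field: persists between solutions

private void RunScript(bool running, object data, ref object A)
{
    if (running)
    {
        // Solver still busy: emit nothing so the heavy downstream network
        // has no fresh data to process.
        A = null;
    }
    else
    {
        // Solver finished: latch the result and pass it downstream.
        _held = data;
        A = _held;
    }
}
```

Unlike a true DataDam this does not stop the expiration wave itself, it only withholds the data, but that is enough to keep the heavy components from chewing on intermediate solver states.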

// Rolf

Fig 1. Example where my generated test points (yellow markers & a small point cloud in red) are used instead of the optimization component (Goat) fiddling with the slider values (magenta). I first ran Goat to find out the maximum search space, etc.

Interesting. I’ll take a careful look at that. Thanks!

// Rolf

Yep, but it uses an unreleased version of Anemone, which can run any gh definition from a script to evaluate the fitness. I won’t release it till GH2 probably.
@RIL If you’re not afraid of scripting, I would go with accord-framework.net, it is written for the purpose of being used by other people (unlike Galapagos), hence there is some documentation.
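
Something like this, for instance (a minimal sketch; the class names are from memory of the Accord docs, so verify against the current API before relying on it):

```csharp
using System;
using Accord.Math.Optimization;

public static class AccordExample
{
    public static double[] FitSomething()
    {
        // Objective to minimize: here just a dummy quadratic with its
        // minimum at (3, -2), standing in for your real fitness.
        var objective = new NonlinearObjectiveFunction(2,
            x => Math.Pow(x[0] - 3.0, 2) + Math.Pow(x[1] + 2.0, 2));

        var solver = new NelderMead(objective);

        bool success = solver.Minimize();
        return solver.Solution; // best parameter set found
    }
}
```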


Can’t wait that long, no. :thinking:

Not afraid of scripting here. But my knowledge about the internals of the optimization algorithms is limited.

My scripting is mostly due to the need for speeding things up by several orders of magnitude compared to what I’m able to achieve with only standard components. So I make most of my components threaded and try to work with indices as much as possible so as to avoid shoveling data around, and so on. I hate long-running processes… :slight_smile:

// Rolf

Are you trying to fit a sphere into a point cloud? You might have a look at regression analysis -> least squares.

That’s only one of the things I do. Spheres are fairly easy to fit, but doing the fitting very fast, on any mesh, any form (< 200 ms), is trickier.

I have very noisy meshes (full of specks inside the surface, not as perfect as in the image in the earlier post, that’s just a lab toy), so I have developed my own strategies for fitting any form onto surfaces that deal with the problem of the specks and “bone marrow” inside while still being fast.

I’ve never used regression analysis for this, but thank you for the hint. If you know of links to any good intro I’d look into that as well. For now I need to get what I have automated, though. I’m always open to advice!

// Rolf

Here is a least-squares approach:

It’s quite fast:

SphereFitLS.gh (14.6 KB)
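
For reference, the algebraic version boils down to a 4×4 linear solve; a plain C# sketch of that idea (the attached definition may differ in its details):

```csharp
using System;

// Algebraic least-squares sphere fit: expand (x-a)^2+(y-b)^2+(z-c)^2 = r^2 into
// 2ax + 2by + 2cz + d = x^2+y^2+z^2 (with d = r^2 - a^2 - b^2 - c^2), which is
// linear in (a, b, c, d), and solve the 4x4 normal equations.
public static class SphereFit
{
    public static void Fit(double[][] pts, out double[] center, out double radius)
    {
        var AtA = new double[4, 4];
        var Atb = new double[4];

        foreach (var p in pts)
        {
            double[] row = { 2 * p[0], 2 * p[1], 2 * p[2], 1.0 };
            double rhs = p[0] * p[0] + p[1] * p[1] + p[2] * p[2];

            for (int i = 0; i < 4; i++)
            {
                Atb[i] += row[i] * rhs;
                for (int j = 0; j < 4; j++)
                    AtA[i, j] += row[i] * row[j];
            }
        }

        double[] u = Solve4x4(AtA, Atb);          // u = (a, b, c, d)
        center = new[] { u[0], u[1], u[2] };
        radius = Math.Sqrt(u[3] + u[0] * u[0] + u[1] * u[1] + u[2] * u[2]);
    }

    // Gaussian elimination with partial pivoting for the small 4x4 system.
    static double[] Solve4x4(double[,] A, double[] b)
    {
        int n = 4;
        for (int col = 0; col < n; col++)
        {
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.Abs(A[r, col]) > Math.Abs(A[pivot, col])) pivot = r;

            for (int c = 0; c < n; c++) { var t = A[col, c]; A[col, c] = A[pivot, c]; A[pivot, c] = t; }
            { var t = b[col]; b[col] = b[pivot]; b[pivot] = t; }

            for (int r = col + 1; r < n; r++)
            {
                double f = A[r, col] / A[col, col];
                for (int c = col; c < n; c++) A[r, c] -= f * A[col, c];
                b[r] -= f * b[col];
            }
        }

        var x = new double[n];
        for (int r = n - 1; r >= 0; r--)
        {
            double s = b[r];
            for (int c = r + 1; c < n; c++) s -= A[r, c] * x[c];
            x[r] = s / A[r, r];
        }
        return x;
    }
}
```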


Very interesting and very fast!

It takes some “pruning” though to get closer to the surface. But this should be possible to enhance (the sphere center and radius end up about -2 mm off compared to my algorithms, although mine is still ~2–10 times slower). But very interesting.

Many many thanks @TomTom, I may well find ways to use this.

// Rolf