I am witnessing some odd behaviour with Galapagos performance. Please tell me if you experience this yourselves.
I am comparing running the same model on two machines.
Running solutions one by one, the faster machine is 5x to 10x faster than the other: roughly 75ms versus 500ms per solution.
The faster machine is a 12C/24T while the slower is an older 4C/8T, so far so good, right?
The not-so-good part is when using Galapagos: both machines show more or less the same runtime per solution, around 600ms on average.
Some overhead was expected, but I also expected some difference between the two machines. My components run intensive loops in parallel, and that shows when running solutions one by one, but not in Galapagos.
It is possible, even likely, that Galapagos itself is not parallelized, in which case it runs on a single core/thread.
See also: Amdahl's law (Wikipedia)
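To make the Amdahl's law point concrete: if only part of each solution's work is actually parallel, and the rest is serial coordination (solver bookkeeping, recomputing the whole definition), extra cores stop helping very quickly. A minimal sketch in Python; the parallel fractions here are hypothetical, not measured:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup when only part of the work is parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Hypothetical numbers: if ~90% of each solution is parallelizable,
# 24 threads give at most ~7.3x overall. If serial overhead dominates
# and only ~20% is parallel, the 24 threads buy barely ~1.24x.
print(round(amdahl_speedup(0.90, 24), 1))   # → 7.3
print(round(amdahl_speedup(0.20, 24), 2))   # → 1.24
```

This would explain why a big core-count advantage when running solutions standalone can almost vanish inside a mostly serial solver loop.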
I am 99.999% certain Galapagos is not in any way parallelized, but it does call parallelized methods in the underlying components. My guess is that Galapagos also does not use cached data when the model runs, so it recalculates everything from start to finish over and over again.
Still, I would expect the improvements from parallelized code running on more threads to show up in the runtime. If not 10x, at least 2x…
Following up: it looks like the CPU isn't being taxed at all for very short solution times (under 1s). Only for solution times above 1.5s or 2s do I start seeing something like 30% CPU usage, while on the slower machine CPU usage is always higher. Weird… It's like the software is capping performance at a predetermined level.
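One way to sanity-check "fixed overhead" versus "capping" with the numbers already in this thread (75ms/500ms standalone, ~600ms under Galapagos): if Galapagos only added a constant per-solution cost, the implied overhead should come out roughly the same on both machines. A rough sketch using only the runtimes quoted above:

```python
# Runtimes quoted in this thread (seconds); everything else is inferred.
standalone = {"fast (12C/24T)": 0.075, "slow (4C/8T)": 0.500}
galapagos = 0.600  # both machines average ~600ms per solution in Galapagos

for machine, t in standalone.items():
    overhead = galapagos - t  # extra time Galapagos adds per solution
    print(f"{machine}: implied overhead {overhead * 1000:.0f}ms "
          f"({overhead / galapagos:.0%} of the Galapagos runtime)")
```

The implied overhead comes out wildly different (roughly 525ms versus 100ms), so a single fixed cost does not explain both machines. That mismatch is consistent with the impression above that something is capping per-solution throughput rather than just adding a constant overhead.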