Another big problem is that one change often “changes the world” in such a way that each subsequent change to that “world” must be dealt with sequentially, otherwise the final result will drift off into… the unknown. That is on top of the overhead that comes “inherently” with locks, context switching in memory, etc.
Concurrency is still considered “black magic”. And as it is with black magic and other kinds of magic, people tend to believe in it, just because.
But single-threaded code can be very, very fast, if well designed.
And when talking “business logic” (not renderers and the like) written with regular imperative programming, there is very seldom any good reason to complicate such systems with concurrency, because you often “change the world” with almost every command.
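The “changing the world” effect can be sketched in Go (the language discussed further down). This is a minimal, hypothetical example, not code from any system mentioned here: a shared “world” that almost every command mutates. Even with a thousand goroutines, the lock forces the commands through one at a time, so extra CPUs buy nothing.

```go
package main

import (
	"fmt"
	"sync"
)

// World is a hypothetical shared state that almost every command mutates.
type World struct {
	mu      sync.Mutex
	balance int
}

// Apply must take the lock, so concurrent commands are effectively
// serialized: adding more goroutines (or CPUs) does not speed this up.
func (w *World) Apply(delta int) {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.balance += delta
}

// run launches 1000 concurrent commands against one shared world.
func run() int {
	w := &World{}
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			w.Apply(1) // every goroutine queues up behind the same lock
		}()
	}
	wg.Wait()
	return w.balance
}

func main() {
	fmt.Println(run()) // correct result, but computed one command at a time
}
```

The result is correct (1000), but it was produced sequentially behind the mutex; the concurrency added overhead, not speed.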
WHERE MULTIPLE CPUs MAKE SENSE
But concepts other than today’s regular imperative languages are needed to simplify concurrency (where it applies). One example of a different concept is FBP (Flow-Based Programming), which is a different way of programming altogether compared to how we use the standard imperative languages today.
Because of its approach, FBP is “inherently concurrent”: there is no code that entangles itself with locks and the like. It is concurrent “by wiring”, not by special code lines.
Grasshopper is a kind of FBP thingy; at least it is reminiscent of the FBP approach. Think of pure FBP as a manufacturing plant with machines and pathways (“wires”) between them. Such machines would resemble “components” (the actual logic) in Grasshopper. The rest is just data “flowing” between the components. When a production line in a manufacturing plant needs more capacity, they don’t start designing “locks” and “mutexes” and make plans for how to avoid “deadlocks” and “data races” and that kind of artificial stuff; instead they add machines in parallel and… done.
In such FBP-like systems, multiple CPUs, GPUs (and manufacturing machines) really shine.
In effect, manufacturing plants, as well as FBP, are inherently concurrent; it is not anything you add to the concept.
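The manufacturing-plant picture can be sketched in Go, where channels play the role of “wires” and goroutines play the role of “machines”. This is an illustrative analogy, not Grasshopper or FBP library code; `machine` and `runLine` are made-up names. Note how adding capacity means raising the machine count, with no locks in the component logic.

```go
package main

import (
	"fmt"
	"sync"
)

// machine is a "component": it reads parts from its input wire and writes
// results to its output wire. It knows nothing about locks; concurrency
// comes from the wiring alone.
func machine(in <-chan int, out chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for part := range in {
		out <- part * part // the "machining" step (a stand-in for real work)
	}
}

// runLine wires up n identical machines in parallel on the same input belt.
// More capacity = a larger n; no mutexes, no deadlock analysis.
func runLine(n int, parts []int) []int {
	in := make(chan int)
	out := make(chan int)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go machine(in, out, &wg)
	}
	go func() { // feed the belt, then close it
		for _, p := range parts {
			in <- p
		}
		close(in)
	}()
	go func() { // close the output wire once all machines have stopped
		wg.Wait()
		close(out)
	}()
	var results []int
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	fmt.Println(runLine(4, []int{1, 2, 3, 4, 5}))
}
```

The output order varies between runs, exactly because several “machines” work the belt at once; the components themselves never had to change to gain that parallelism.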
THEY ALREADY HAD IT BUT LOST IT
FBP was, by the way, developed into an industrial-strength programming approach by J. Paul Morrison ( http://www.jpaulmorrison.com/fbp/ ) in the late sixties. As an IBM employee he built a (huge) banking system in Canada in 1974, a system that has been up and running for over forty (40) years (parts of it are still very much up and running). FBP systems are robust, inherently concurrent, and simple to configure and update: you reconfigure the components and off you go, and when more capacity is needed, you add more components in parallel (more “machines”), where it makes sense, that is.
(Although that doesn’t make sense as often as many would believe, due to the “changing the world” effect mentioned above; but FBP has many other benefits as well.)
J. Paul Morrison is still going strong (last I checked) and he is over 80 years old today (watch a video clip with him on that linked page) and is still doing consulting work. He is seeing an emerging interest in the FBP concept, a concept which predates today’s inherently over-complex languages, which by design are “inherently concurrency resistant”. …
But the imperative pack is fighting back. Google, for example, decided to develop a new (imperative) language with better concurrency by design (designers Ken Thompson, Rob Pike, et al.); the language is called Go (often “Golang”, after its golang.org domain). The renowned Swedish telecom company Ericsson has also developed a language specialized for concurrency, Erlang (often read as “Ericsson Language”). But the “changing the world” effect is still very much present in so many cases. And the fact of the matter is that the number of available CPUs, or GPUs, isn’t going to change that reality anytime soon.