Is Autodesk working on a Grasshopper alternative?

Hi guys.

I heard that Autodesk is working on a Grasshopper alternative and that it will be part of Alias.

Is this correct, or is it a hoax?

  • peter

Search for Dynamo or Dynamo Studio. Revit and 3ds Max already connect to Dynamo. I expect more programs will follow.

Node-based 3D editing/programming has been around for a long time, even long before Grasshopper. The first time I ever saw something comparably powerful was in C4D, maybe 15 years ago or longer, called Xpresso.

I saw similar features, though probably not to that extent, slowly appearing in other applications much later, but I am not sure whether C4D was a pioneer here or not; information about that is also quite thin on the net as far as I could see.

It seems this is a preferable way to go for the future; many will be jumping on that train soon enough.

The train seems to be a bit slow, I’d say. Let me explain.

What we see in Grasshopper and the like is actually a concept called FBP (Flow-Based Programming), developed for industrial-strength applications in the sixties. It was used in a major Canadian bank system in the early seventies, and major parts of that system have stayed up and running for over 40 years since.

INHERENTLY CONCURRENT
One of the main advantages of FBP - provided you have correctly designed the separation between the components (processes) and the logical network (the orchestration) - is concurrency. FBP is “inherently concurrent”, just as a machining shop is inherently concurrent: place two or more machines in parallel and off you go. No data races and no deadlocks. By implication.
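
To make the machining-shop picture concrete, here is a minimal sketch in C# (my own illustration, not code from any actual FBP library): each component is an independent process that only reads packets from its input connection and writes packets to its output connection, and you get more throughput simply by placing two identical workers side by side.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

class FbpSketch
{
    static void Main()
    {
        // Bounded "connections" between components, as in an FBP network.
        var input  = new BlockingCollection<int>(boundedCapacity: 16);
        var output = new BlockingCollection<int>(boundedCapacity: 16);

        // One component feeds the network with "information packets".
        var producer = Task.Run(() =>
        {
            foreach (var n in Enumerable.Range(1, 100)) input.Add(n);
            input.CompleteAdding();
        });

        // Two identical worker components placed "in parallel on the shop floor".
        // They share no state; they only read packets in and write packets out.
        Task Worker() => Task.Run(() =>
        {
            foreach (var n in input.GetConsumingEnumerable())
                output.Add(n * n);          // stand-in for real work
        });
        var workers = Task.WhenAll(Worker(), Worker());

        // Close the downstream connection once both workers are done.
        var closer = workers.ContinueWith(_ => output.CompleteAdding());

        // A sink component consumes the results.
        foreach (var r in output.GetConsumingEnumerable())
            Console.WriteLine(r);

        Task.WaitAll(producer, workers, closer);
    }
}
```

The two workers never touch shared state, so there is nothing to lock; the bounded connections are the only coordination.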

Not so with our “modern” über-complex imperative languages. The main disadvantage of today’s over-complex programming languages is that they are, well, over-complex, and inherently non-concurrent.

Look at this guy, J. Paul Morrison, a former IBM man and the inventor of FBP.

NEW (OLD) PARADIGM
J. Paul Morrison is over 80 years old today (I think) and still doing consultancy on his brainchild, FBP. Also read his book (the first edition is free online, although I recommend the second edition): http://www.jpaulmorrison.com/fbp/

The basic concept of FBP is just too smart (and thus too simple for the taste of many of today’s believers in über-complex languages). And as a result of the lack of research into truly concurrent paradigms in recent decades (in part due to the OOP paradigm, which by the way is fine for writing individual components for use in FBP networks, but not for writing concurrent code), the whole industry is groaning today over the lack of “languages that support concurrency” (some would call it parallel computing), which is said to be the only way forward to utilize our multiple CPU cores.

But a practically useful approach to concurrency (parallel computing) was already developed in the sixties(!), and fine-tuned and used in commercial applications at the beginning of the seventies. And the overall design of those systems looked like… Grasshopper!

I saw an early hand-drawn model by Mr. J. Paul Morrison, and it looked like… an “early version” of Grasshopper, drawn 40+ years ago. :slight_smile:

With today’s languages you fight locks and mutexes and end up with extremely over-complex logic trying to avoid what you will end up with anyway - deadlocks or data races, or both. In FBP that’s not something you think about very much (if the FBP system is properly designed, that is).
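
As one concrete illustration of that fight (my example, nothing taken from FBP or Grasshopper): two tasks that each take the same two locks, but in opposite order, will hang forever as soon as the timing is unlucky - and nothing in the language stops you from writing this.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class DeadlockSketch
{
    static readonly object lockA = new object();
    static readonly object lockB = new object();

    static void Main()
    {
        // Task 1 takes A then B; task 2 takes B then A.
        // If both grab their first lock before either grabs the second,
        // each waits forever for the lock the other one is holding.
        var t1 = Task.Run(() =>
        {
            lock (lockA) { Thread.Sleep(100); lock (lockB) { Console.WriteLine("t1 done"); } }
        });
        var t2 = Task.Run(() =>
        {
            lock (lockB) { Thread.Sleep(100); lock (lockA) { Console.WriteLine("t2 done"); } }
        });
        Task.WaitAll(t1, t2);   // never returns: a classic lock-ordering deadlock
    }
}
```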

The man is old, yes, but he’s on GitHub, and there’s a JavaScript version on his GitHub page, and a Java version, and a C# version, and a C++ version, and …

Chat with him on Google Groups. He has some experience to share… :slight_smile:
https://groups.google.com/forum/#!forum/flow-based-programming

// Rolf

Although this is superficially true, I find that bottlenecks in algorithms mostly tend to appear consecutively, not in parallel. As such, the fact that you can compute two parallel streams of operations concurrently makes little difference, because one of those streams is going to take 15 milliseconds and the other 4.8 seconds. It’s a theoretical curiosity of little practical value.
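
To put rough numbers on that (just restating the figures above): run one after the other, the two streams take 4.8 + 0.015 = 4.815 seconds; run side by side they take max(4.8, 0.015) = 4.8 seconds. The saving is roughly 0.3%.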

As long as the operations pipeline doesn’t involve any aggregation of partial data along the way, then yes, concurrency can be said to come fairly naturally to a node-based algorithm. But use a Shift List, or List Length, or any operation that requires adjacent data anywhere along the way, and you have to deal with race conditions all over again.
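
A small C# sketch of that distinction (my illustration, nothing to do with how Grasshopper actually evaluates components): a pure per-item operation distributes over cores safely, while a naive parallel in-place shift reads a neighbouring element that another iteration may already have overwritten.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class AdjacencySketch
{
    static void Main()
    {
        var data = Enumerable.Range(0, 1000).ToArray();

        // Per-item work: each output cell depends only on its own input cell,
        // so spreading it over cores is safe "for free".
        var squared = data.AsParallel().AsOrdered().Select(x => x * x).ToArray();

        // An in-place "shift left by one" needs its neighbour's value.
        // Run naively in parallel, iteration i may read data[i + 1] before or
        // after iteration i + 1 has overwritten it, so the outcome depends on
        // how the loop is partitioned and scheduled - ordering problems are back.
        Parallel.For(0, data.Length - 1, i =>
        {
            data[i] = data[i + 1];   // order-dependent: don't do this
        });

        Console.WriteLine(squared[10]);   // always 100
        Console.WriteLine(data[10]);      // may or may not be what you expect
    }
}
```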

Each individual node can be made to run concurrently, assuming it operates on more than one input. Grasshopper 1.0 doesn’t do this; Grasshopper 2.0 will. I hesitate, however, to call this sort of concurrency “inherent”. It’s just that many operations, when looped over arrays, can be distributed to multiple cores. F# (I imagine) and PLINQ do this because they take a very operation-centric approach. C# has alternative loop formulations which accomplish this as well (clunkily, I agree).
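
For reference, this is roughly what those two formulations look like in C# (my sketch; Solve is a made-up stand-in for whatever a node does to one input item): the PLINQ one-liner versus the explicit parallel loop.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class PerNodeSketch
{
    // Hypothetical stand-in for a node's per-item work.
    static double Solve(double x) => Math.Sqrt(x) * Math.Sin(x);

    static void Main()
    {
        double[] inputs = Enumerable.Range(1, 10_000).Select(i => (double)i).ToArray();

        // The PLINQ formulation: per-item work fanned out over cores,
        // with the original input order preserved in the result.
        double[] a = inputs.AsParallel().AsOrdered().Select(Solve).ToArray();

        // The "alternative loop formulation": the same distribution, spelled
        // out with an explicit parallel loop and an explicit result array.
        var b = new double[inputs.Length];
        Parallel.For(0, inputs.Length, i => b[i] = Solve(inputs[i]));

        Console.WriteLine(a.SequenceEqual(b));   // True
    }
}
```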

I absolutely agree that C# is not a concurrency friendly language, I’m just sceptical about the inherent awesomeness of FBP in real life.

Well, I agree that if you narrow in on the “algorithm” then you’re (most often) better off processing it sequentially (simply because most often you must calculate “a” before doing “b” anyway).

It’s about where to draw the line, so to speak. Where to split the job.

If one process renders one tire while another component (read: process) renders the other tire, then it’s like in the machining shop - two machines (processes) do the job of rendering two tires faster than one machine (process) would.
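
A plain sketch of that split in C# (my illustration, nothing Rhino-specific; RenderTire is a made-up stand-in for one component doing one complete job): give each “machine” one whole tire as an independent task, and the elapsed time is roughly that of one tire rather than two.

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class TwoTiresSketch
{
    // Hypothetical stand-in for one component doing one self-contained job.
    static string RenderTire(string name)
    {
        Thread.Sleep(1000);        // pretend this is a second of real work
        return $"{name} rendered";
    }

    static void Main()
    {
        var sw = Stopwatch.StartNew();

        // Two "machines", each given a whole tire - no shared state to guard.
        var left  = Task.Run(() => RenderTire("left tire"));
        var right = Task.Run(() => RenderTire("right tire"));
        Task.WaitAll(left, right);

        Console.WriteLine(left.Result);
        Console.WriteLine(right.Result);
        Console.WriteLine($"elapsed ~ {sw.ElapsedMilliseconds} ms");  // ~1000, not ~2000
    }
}
```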

On the “user level” (but very often also on the developer’s level) one can view the “job list” at the abstraction level of “real objects” and, note, not bother about concurrency issues, since it’s as simple as wiring up more components to do the job.

Having said this, the concept of parallel computing does indeed promise more than it delivers in many cases. Many jobs can be sped up just as well by smart handling of the items in a sequence (for example, instead of letting consecutive items sit waiting for other “consecutive” items to finish, deal with the showstoppers first, even reorganizing the items on the fly, and so on).

So, all in all, if you draw the line in the right place, FBP is inherently concurrent for the same reason that you place assembly lines or milling machines in parallel on the floor of a machining shop.

What you don’t do, though, is make half the piece (the “algorithm”) in one machine and the other half in another (identical) machine. It’s most often faster to finish what you started (the algorithm) in the first machine. The same logic often goes for algorithms, which is why you most often isolate an algorithm in a component, just like you’d do in Grasshopper (as you would in any FBP application).

Now, using Clusters (sub nets in FBP) you just bundle more related algorithms into bigger ones, and so on. But you don’t worry much about locks and data races using the FBP approach. I say approach, because FBP is an approach rather than a language (which is why FBP is implemented in many languages).

I myself designed one of the world’s leading-edge logistics business systems, but I can’t see how the basic design of such a (note: business) system could benefit from the FBP approach, because most user actions “change the world” in a way that the business logic must take into account. Many subsystems would definitely benefit a lot, though. But in a software like Rhino the potential usefulness is much bigger, although not on the algorithm level (agreed).

// Rolf