I’ve been using Grasshopper since Fall 2009. It took a couple of years to really make the most of the tool, but I’ve used it pretty steadily for quite a few years on most projects. Every once in a while I check in to see how Grasshopper 2 is coming along, and each time it seems to be the same. Most of the great plug-ins seem to have completed development and receive only minor updates every year or so at best. Have those who were the most hardcore into Grasshopper moved on to other tools?
It’s now been a decade since the core group of users began using the app. In the office there is the hand-drawn generation, the AutoCAD generation, the Revit generation, and then the Grasshopper generation. I’m beginning to wonder whether the next iteration of Grasshopper might not be ready until that next generation of new designers arrives, and if so, whether it will remain relevant amid whatever design computation becomes during this next generation of technology.
I’m not at all saying that any of this is easy. And I wouldn’t be where I am today if Grasshopper had never existed. But I’m curious: does it really take an unknown number of years to develop Grasshopper 2? Is David Rutten the only person on this project? : \ If it’s a budget issue, I’d imagine McNeel could make more money selling Grasshopper than they currently do with Rhino.
Generations are not different per se; it is the context that evolves and defines generational changes. If you divide generations according to their tools (our context here), then the next generation will come when there is a different tool, right?
I have always insisted to David that he should support AI tools as much as possible, not only because of what you ask, but also because I think it has the perfect UI/UX for that. As far as I know, he has already included distributions and N-dimensional vectors, and with more ground like this (embeddings, regression, convolutions, labelling, matrix operations, etc.) I have no doubt that GH2 is not going to be left behind, in terms of enabling innovation, in 10 years. Anyway, I think what we have to wait for is the cleanup of the GH code; let’s see how much it allows building on top of it, probably much more than GH1 already allows. So I’m optimistic in that sense, due to trust more than anything else.
He once said that he works on it alone and that his workmates help with particular things. David puts a lot of love into his interfaces; that’s why they are so nice and take so much time. A nodal system for modelling has hundreds of edge cases that are very complex to polish into a seamless product, plus all the native components, plus all the innovation testing, plus the mental exhaustion and pressure of improving on GH1… no wonder it takes so much time for one person.
I don’t think selling GH would be very successful, because people tend to pay to make the job easier, not the other way around. It makes more sense to me that Rhino would be more expensive and continue to include GH for free, perhaps releasing some native functionality as paid plugins to keep it competitively priced. Or include a marketplace for definitions and plugins and take a cut of each transaction. The point is that GH is still niche software, in my opinion, compared to how big the 3D market is. Maybe that’s why they consider one developer to be enough; otherwise I don’t understand why!
I doubt Grasshopper 2 will bring much innovation! The actual limitations we usually see are related to RhinoCommon and Rhino itself, not to the “GUI”. There is already a generation shift happening… People move on, get bored, or simply disappear. It remains a useful tool, but it lost its ‘hype’ over time. One issue is that algorithmic modeling remains a niche and only a few people actually do it as a full-time job. I was one of those few, but I left, because a complex job became “please-fill-this-triangle-with-circles” slavery. Don’t get me wrong, I really love GH and I still do some things with it, but as soon as I started to develop my own tools, I mentally moved on, because that challenges me much more. I guess I’m not the only one with this experience, and anyone will leave at some point, no matter whether the tool improves or not.
Grasshopper is such an amazing tool, but I too worry that it might get eclipsed by some new player. I’ve been checking out Houdini lately and I fear it could shake up this market. Houdini’s strength seems to come from its nature as VFX software, which gives it advantages in those areas, and it has some clear advantages over Grasshopper regarding the UI. I totally agree that the seeming limitations of Grasshopper are really Rhino itself, but I understand that McNeel’s resources are relatively small. Given this situation, how in the world could we expect them to apply AI or deep learning to Rhino? And at what level? Is it all prediction algorithms, i.e. geometry creation and modeling processes? It’s interesting to see where this discussion is headed.
Hi, Ivan. As I’ve said, they’re relatively small. The real comparison, as we all know, is with Autodesk; that’s the benchmark. I don’t claim to know the business side of McNeel, and I’m just giving my personal opinion based on my observations, but I think a lot of people would agree that they’re small in the grand scheme of things. They’re powerful but small. So you can assume that they don’t have enough resources to improve their product at the rate we users dream of; it’s the pace I’m focusing on. Another sign is that Rhino lacks features that inferior software packages already have. Also, having read the forums, you can find requests and bugs that are seven years old and to this day are not fixed. I think this is a clear sign of lacking resources. Either that, or they’re very incompetent, and I don’t believe that is the case. I think your question about who gets the resources misses the point.
It was more or less a rhetorical question. I think many issues that have not been resolved are due to the fact that they would need to rewrite big portions of core code, which is a huge task by all means. It’s no better even if you are a giant like Autodesk: they keep bringing in new features while leaving all the previous mess untackled. It’s like the universe expanding faster than the speed of light, which means the light from flawless software will never reach us. Hope you get my metaphor.
About 5 years ago, all the architectural job listings for Grasshopper experts suddenly transformed into listings for BIM managers able to teach a whole office to switch from Rhino/Grasshopper to Revit/Dynamo. Professional modelers in the creative realm moved to Houdini.
Hi Djordje, I’m currently part of a team at Ford, developing a range of software tools to test electronic control units for various types of cars (conventional, BEV, xHEV…). So nothing in that domain anymore.
Some of these tools communicate with Matlab and Simulink. Simulink is kind of a Grasshopper equivalent.
Software is one of those industries that always keeps moving: there will always be new tools that rise to popular acceptance and other tools that begin phasing out. So that’s a given.
It’s difficult to accept this question when the software, although available for a number of years, only released its 1.x version with Rhino 6. Are people asking whether the latest version of Photoshop is still relevant? What about 3ds Max?
Over the past few years we have seen a boom in visual programming tools built on the node-network UI/UX paradigm, especially when it comes to graphical content generation, 3D, multimedia, and visual effects.
Grasshopper serves as a platform that easily broadens this spectrum to engineering, science, mathematics, analytical geometry, “live” electronics, etc., so I see only the tip of the iceberg.
I believe Rhino’s paradigm is the one that should be questioned: although it is more than capable of delivering for some needs, it quickly becomes limited and dated for others, which is problematic when you consider the quantity, quality, and strength of the competition.
It seems a given that in the upcoming years we will see a shift towards heavily AI-assisted software tools. For me it would be logical for Grasshopper, which already has a foot in the door, to deepen and maybe pioneer the new paradigm. The only question for me is whether this will happen within the Rhino ecosystem or not.
I don’t see this as granted. BTW, while I write this statement my AI driven autocompletion is driving me crazy…
I do see some applications, but I still doubt its impact on node-based scripting editors.
In GH2, maybe the component search bar could suggest which component you want to use next, based on what you are currently doing, but features like that are not critical. And isn’t this the kind of feature usually meant by “AI-driven assistance”?
I guess you rather mean AI-supported designing, but I really don’t see an AI taking over in the near future.
I have tinkered with NNs and GAs here and there, but I’m really having trouble understanding at which point an algorithm based on probability is better than an explicit algorithm.
Of course this could just be me, but can you give an example of what you would expect GH2 to offer in this regard?
That’s like refusing to use CAD because you can represent everything with a pencil. There are no good answers to bad questions. If a technology broadens or enhances the domain of tasks that a professional performs, it should be understood in those terms, rather than pitted against the previous technology.
I’d like to wait for this; I’m sure some of it will arrive in the future, though it’s not a priority, of course… and more mathematical operations such as partial derivatives, representation forms such as tensors, graph and agent operations, optimisation techniques, support vector machines, training algorithms, pre-trained recognition or classification networks (based on modelling processes or anything else), visualisation techniques, dimensional compression, semantic understanding… all of this can be enormously useful in design.
But none of this will make much sense until McNeel decides to enrich geometry so that it can be understood by machines as objects with real meaning.
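To make one item on that wish list concrete, here is a minimal sketch of “dimensional compression” via principal component analysis, written in plain NumPy with made-up sample data (nothing GH-specific, and all names are my own): a 3D point cloud that actually lies near a plane is reduced to two coordinates.

```python
# Hedged sketch: PCA via SVD, compressing near-planar 3D points to 2D.
import numpy as np

rng = np.random.default_rng(1)
uv = rng.normal(size=(100, 2))                          # latent 2D coordinates
basis = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, -0.5]])                    # plane embedded in 3D
points = uv @ basis + rng.normal(0.0, 0.01, (100, 3))   # near-planar 3D cloud

centered = points - points.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
compressed = centered @ vt[:2].T        # 2 coordinates per point
explained = (s[:2] ** 2).sum() / (s ** 2).sum()         # share of variance kept
print(compressed.shape, round(explained, 3))
```

Because the noise is tiny relative to the plane, the first two components capture almost all of the variance, which is exactly the kind of structure discovery that would pair naturally with geometry that carries “real meaning”.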
Perhaps “goal-oriented algorithm” would shine more light on a problem which is often overlooked when discussing design, namely the goal, or conscious aim, at which design is striving. Goal as opposed to blind searching (and never finding, if the goal is not known!).
I omitted AI here, but it would have much in common with Genetic Algorithms, although being “goal-constrained” due to learning before applying, etc.
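As a toy illustration of the “goal-oriented” idea (my own sketch, not taken from any GH tool): a minimal (1+1)-style evolutionary loop that mutates a candidate and keeps the mutation only when it moves closer to an explicitly known goal. The goal and fitness function are the whole point; without them this would be exactly the blind search described above.

```python
# Toy goal-oriented search in the GA spirit; all values are illustrative.
import random

random.seed(42)
goal = [3.0, -1.0, 2.5]                 # the consciously chosen target
candidate = [0.0, 0.0, 0.0]

def fitness(c):
    # Distance to the goal: the search is meaningless without a known goal.
    return sum((a - b) ** 2 for a, b in zip(c, goal)) ** 0.5

for _ in range(5000):
    mutant = [v + random.gauss(0.0, 0.05) for v in candidate]
    if fitness(mutant) < fitness(candidate):
        candidate = mutant              # keep improvements only

print(round(fitness(candidate), 3))     # distance shrinks toward the goal
```

The same loop with the `if` removed would wander forever, which is the design-versus-form-seeking distinction in miniature.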
I don’t know of any universally agreed-upon definition of the concept known as “design”, but one can always compare it with other concepts to clarify what they have in common and what they don’t. For example, the following concepts are different in many ways, while some aspects fundamental to each of them are not always well understood (the three fundamentals, as I see it, are purpose, goal, and means). A comparison matrix would look something like this:
Fig 1: comparison matrix (image). Edit: more elaborate comment on the “means” (image).
My point being that if you do not fully comprehend the differences between design and “mindless form-seeking” (without a foreknown goal), then you are at risk of doing something which is not what you think you are doing.
So I’m all in for knowing the goal and purpose of any (meaningful) work, and for applying suitable tools to achieve that goal, which in many (most?) cases is best served by “explicit design”, or, in the terms I prefer, by “goal-oriented algorithms”.
Because it works well in 95% of all cases but is super annoying in the other 5%.
I totally agree that these features are useful, but they don’t really help me do my job better. And “faster” is kind of a weird metric: are you faster because you have better tools, or are you faster because your editor lets you arrange components more quickly? Did you really experience Grasshopper being too slow in this regard? I would rather say Grasshopper’s primary issues are its inability to handle larger definitions, its cumbersome way of including logical branches, and its complicated data management. This is what I would like to see improved, which would also improve things for beginners.
This is what I would see as total overkill for 95% of all users. The average GH user doesn’t even know how to multiply matrices or how that could turn out to be useful one day. Regression, yes, in its simplest form might be useful: linear least squares, approximation of point data… sure, but again, most users want to automate CAD, not create mathematical models. And of course GH1 already offers a rich set of plugins for all these problems.
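For reference, that “simplest form” of regression, linear least squares over noisy point data, is only a few lines of NumPy. This is a hedged sketch with made-up sample data, not GH2 functionality:

```python
# Fit a line y = m*x + b to noisy samples with linear least squares.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=x.size)  # noisy samples of y = 2x + 1

# Solve min ||A @ coeffs - y||^2 for coeffs = [slope, intercept].
A = np.column_stack([x, np.ones_like(x)])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
slope, intercept = coeffs
print(round(slope, 2), round(intercept, 2))  # close to 2.0 and 1.0
```

Which rather supports the point: the math is trivially available already, and the hard part for an average user is knowing when point-data approximation is the right model, not computing it.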
I get the point, and I’m not refusing AI; I just question its usefulness in this particular field of application. CAD has clear advantages and disadvantages, and depending on the complexity it is very accessible. The problem I’m having with AI is this big black box.
Only a few people truly understand what’s inside and how it’s applied correctly, and that’s the issue. If you’re going to use an AI to validate a design, for instance, how can you be sure that the assumptions are correct? You get a local extremum, but is that really what design is about? Or am I wrong?
It’s even dangerous at some point: people claim they have a trained model which calculated the best outcome, without even questioning it anymore.
This link is a perfect example of why not to use an AI. What is the video supposed to tell me? This person is morphing a car bit by bit into a different class of car. But this is not what designers want to know.
They already know what a limousine or a sports car should look like. It would be absolutely pointless and a waste of resources for somebody to do something like this. Well, I’m not a car designer, but I can tell that at this level of resolution people are indeed still using a pen, because it’s easy and perfectly suited to this sort of task.