Thoughts on this claim about Grasshopper and AI

I don’t know how much of this applies to an architectural design process, since I started my career in the automotive industry and have never actually worked as an architect. I see it like this:

Grasshopper is a tradeoff. You get repetitive things done without directly modelling them. But by doing algorithmic or parametric design you always give up control. This is why parametric CAD systems don’t work so well for ‘design’, and why direct-modelling CAD exists and is still widely used.

So I always ask myself: is it really necessary to create a shape parametrically or not?

I think the biggest misconception about Grasshopper or AI in design is the idea of having a superior or next-generation modelling/design process. This idea is actually based on a lack of knowledge. Many people simply don’t know how to do proper and efficient 3d modelling. This limits them in expressing their ideas (if they even have them). The idea is: if I don’t know, maybe a piece of code will know… which obviously is a fallacy.

If you decide to create parametric or even generative models, it doesn’t make sense to expose everything as a parameter. If a parameter has no relevance, make it a constant. Nor does it make sense to make the whole model parametric at all.

I would rather promote a divide-and-conquer approach. Reduce the parametric part to its bare minimum, with manual pre- and post-processing, even having multiple stages where a model becomes dumb again and then gets parameterized again. This is not a flaw but rather an advantage.

It really is hard to create generalized solutions, but you can break the problem down into smaller groups. If you keep things modular, you can apply your modules to different design models.

A simple example would be a facade. If you have 20 patterns that work on a rectangular surface, you can apply them and create 200 permutations (by setting a small number of parameters). These 200 variations may already be better than anything an AI could “produce”. Of course this input and output may be simplified, but constant refinement is a must. Something similar applies to primitive geometries. This is something which doesn’t work in automotive, but in architecture you could use GH to do simple form finding before doing manual refinement. So what I’m saying is that you create helper functionality, not full solutions… And I still see no real need for AI in this process.
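To make the permutation idea concrete, here is a minimal sketch. The pattern names and the density parameter are made-up placeholders; the point is only that a small curated set of patterns times a small curated set of parameter values already yields 200 candidates:

```python
from itertools import product

# Hypothetical setup: 20 facade patterns, each driven by one
# exposed parameter (here, a panel density value).
patterns = [f"pattern_{i:02d}" for i in range(20)]
densities = [round(0.1 * i, 1) for i in range(1, 11)]  # 10 curated values

# Apply every pattern with every density: 20 x 10 = 200 variants
variants = [(name, d) for name, d in product(patterns, densities)]
```

Each `(pattern, density)` pair would then drive the actual GH definition; the enumeration itself is trivial.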


Because you don’t want to see it. If you have thousands of permutations of a design and you want the best 20 as automatically as possible, you need AI. If you have a line of products and want to analyze your production to fill gaps, or to understand why some work better than others, you need AI. If you want to map your parameter space to a more human-friendly one, you need AI. If you want to know which constructive elements of a design best fit a client or a briefing, you need AI. And there are more hidden utilities in this other way of approaching problems. There’s a huge area beyond classification and recognition. Of course you can do everything differently, but by that logic you could do everything with worse tools. And if you haven’t seen this applied already, it’s because it’s a paradigm shift. Just as you can’t move from a 3d model to a parametric model without redoing it from scratch, AI applied to design also requires working in a different way. We’re in phase zero of this: measuring. Detail modelling is a part of the design, not all of it.


I do actually see the theory behind this and how an AI could help, but I believe it’s not realistic and is disproportionate. It’s total overkill to train an AI for such a difficult evaluation process. The idea is to get something better, but in the majority of cases I have seen the opposite. People tend to just claim things without giving any proof: the common practice of post-interpretation.

I do believe detail is a core aspect of any good design. You can like a design or not, but if it’s well made, people recognise its value. The opposite is also true: a good design intent which is not well made is a bad design. The challenge is very often reaching this level of detail, not finding a perfect design idea. It’s a complex puzzle to solve, and I believe in the superiority of the human brain here. That’s all.

Innovation is only innovation if it does things better, not just differently.


So maybe I’ll just ask a question here. Let’s take the main domain of Grasshopper, which is creating parametric patterns. How could an AI actually help me with this job?

It’s not a rhetorical question. If someone can show me real benefits, I’m open to them…

That’s just not the approach. You can’t ask a fish to climb trees. What you can do is see how the fish can be useful to you. Different tools have different domains of use. You can’t make the paradigm jump to using ML if you don’t have data to extract information from. GH allows you to generate that data, but for the moment just geometric data; it needs design data as well. But we designers are not used to thinking in terms of data engineering, just as years ago it was not common for designers to think in terms of programming. In my view it is a logical step.

I’ll give you more examples. Seeing a car as a design topology, that is, all the possible relationships between the elements that make up the car design: what does this graph look like? An AI can display it in lower dimensions. If we weight the links between two elements by frequency of use, what shape does this graph have? If we relate this data to a customer graph, where the buyer’s profile is related to the purchased car, what shape would that graph have? Can we get new value from it? Can we understand why a design works for a company? And if we reduce the dimensionality of hundreds of car parameters to 2 or 3, can we create other kinds of tools? Can we explore the parameter space in a compressed parameter map, for example? Can we study how to modularize better? How to generalize better? What if it simply lets you access the tool you need faster and saves you days of work throughout the year because it predicts what you want to design, the way your phone’s text predictor helps you? What if it allows you to convert a sketch or a sentence into a part of the definition or the model? Or what if we can model directly in a latent space, so that we have abstract surfaces on it that mean something, like being able to interpolate between two parameter configurations without breaking the design, if on that surface there exists a non-manifold curve connecting those two points? The mappings that an auto-encoder allows you to do go beyond my imagination; that’s why it’s an algorithm that has driven innovation in so many areas. Design is not going to be an exception, maybe it’s just not how you imagine it. It’s not about doing what we can already do with GH. ML is a revolution not because it allows us to do the same thing better, but because it has opened up a huge new range of tasks that can now be done, or are open to be developed. And GH is a perfect environment for working with design data, which is the base for these new tools.
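To make the “compress hundreds of car parameters to 2 or 3” idea concrete, here is a minimal sketch using plain PCA (a linear cousin of the auto-encoder mentioned above). The dataset is synthetic and purely illustrative; real design data would replace the random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 50 car designs described by 100 parameters,
# secretly driven by only 2 latent factors (so compression loses nothing here).
latent = rng.normal(size=(50, 2))
mixing = rng.normal(size=(2, 100))
designs = latent @ mixing

# PCA via SVD: project the 100-D parameter space onto a 2-D "design map"
centered = designs - designs.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
compressed = centered @ vt[:2].T  # shape (50, 2): one point per design

# Interpolate between two designs in the compressed space,
# then lift the midpoint back into the full 100-D parameter space
midpoint = 0.5 * (compressed[0] + compressed[1])
new_design = midpoint @ vt[:2] + designs.mean(axis=0)
```

An auto-encoder would replace the two matrix products with learned non-linear encoders/decoders, but the workflow (compress, explore, decode) is the same.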

But it’s okay if I haven’t convinced you. Time puts everything in its place. There are hundreds of papers out there that show this is going to affect design as well. I’ll buy you a night of beers if, in five years’ time, there’s no use of AI in design that’s really useful.


I really don’t know. What you have said sounds so abstract that I still don’t understand exactly which problems this is supposed to solve. I’m curious about what’s there in 5 years. I’ll accept :slight_smile: My guess is that we will actually have made little to no progress on this topic at all, so that you could say nothing really changed.

A lot of what you said assumes that you are dealing with parameterized models. At least in car design, you are miles away from this state. This might be better in architecture, but even there parametric models are not the norm.
This is actually the first problem to be solved: how can we efficiently create design models which are fully parametric?!
In automotive, for some limited parts this is feasible and already in use. I’m not allowed to talk about the details, but it’s not done in Grasshopper.
Maybe an anecdote from my past work. A couple of years ago I was working on a project where we tried to parameterize early (!) design parts of cars with Grasshopper. But it failed miserably, mainly for three major reasons: A.) too many parameters to consider, B.) missing surface tools, C.) missing reusability combined with slow workflows.

Again, I think the biggest hurdle is adapting this theory so that it actually creates benefit in a practical manner. The reason I’m so sceptical about this is that the people who do this kind of research usually still lack the practical side, or even worse, willingly ignore it.
The result is that theory and practical application are miles away from each other. It almost seems like it’s about solving problems which actually are no problems. :slight_smile:


@TomTom, I think you identified the key difference between what you and @Dani_Abalde are saying. Much of what Dani said is above my pay grade, but if I understand him correctly, he describes myriad AI applications for operating in multi-dimensional parameter spaces and is challenging us to consider design workflows that operate in this “new world” rather than look for opportunities to apply AI within the “old world”. You seem to argue that there is much in the non-parametric “old world” that is still valuable and that AI is less useful there.

If my interpretation of your comments is correct, both of you make great points. Personally, I agree with @TomTom that many existing workflows outperform “innovative” new tech. I also agree with @Dani_Abalde that we need to challenge ourselves to identify ways to move our design process into a parametric realm where AI tools can be most effective. While hand sketching will never become obsolete, hand drafting certainly has.

Perhaps the question @TomTom is asking is which portions of the design process are ripe for AI and what would that look like? I’m curious what the community thinks about this.

I propose that all performance analysis is ripe for AI. In fact, any time we inform a design decision with data I think that process can be re-imagined to leverage new methods. For example, if we transition from analyzing specific design options that have already been conceptualized to analyzing the system of available design decisions (the parameter space or design space), we can then leverage a whole suite of powerful ML / AI / statistical tools to deliver insight into our design process. This can be done through robust conceptual analysis that identifies which direction the team should go, or through AI-enabled real-time analysis tools that assist the designer while they experiment. I think the sun is setting on workflows where engineers/consultants react to design options.

Non-analytical applications seem more difficult to cast into an AI world. I think @Dani_Abalde’s quote below shows one method for translating a human-centric design process into a multi-dimensional, AI-ready design environment. If possible, this would be much more flexible than a strictly parametric script. Designers would guide the process but leverage AI to rapidly ideate.

Why don’t we get the man himself in here with his thoughts on this? Welcome, dear @jesper.


@shackleton AI is an emerging technology and there is no telling what is going to happen in the future. Is Grasshopper the right tool? I guess we will see. What I do know is that it is a fine place to explore whatever AI is currently available.

I completely agree with:

Something like the CoreStudio Damage Detector service is a great way to use machine learning in a “traditional” way to identify situations from a large amount of data/videos.

There are a few tools available to experiment with AI:
Free Generative Design – A brief overview of tools created by the Grasshopper community

I would not forget LunchBoxML and the workshops that come with it.

We could look at wave function collapse.

Or perhaps running Rhino and Grasshopper inside CPython 3, using all the newest AI libraries available.

Or maybe a good look at Optioneering, which is not AI but closely related.

Of course there is Galapagos in Grasshopper.

Finch3d by @jesper is quite interesting also.


Hi @shackleton

Here are my thoughts on AI in architecture: article - archdaily

The videos you’re referring to are just a couple of screenshots of our prototypes that I’ve posted on my Instagram, and instead of asking me what they are, a lot of people drew their own conclusions. So thanks for asking me, @ajarindia :slight_smile:

There’s for sure AI involved in Finch, just not in the way people seem to think.


I don’t think there is a new and an old world. We parameterize CAD data to change things faster; on the other hand, creating a parametric model is a slower process. This means that to work effectively you cannot and should not parameterize everything.
Since AI in the end has a very strong relationship to statistics, the interpretation and selection of data is the key. If people don’t know how to interpret data and select parameters correctly, the outcome of any AI is pointless. You can de-improve things by accident if you blindly trust it. Not to speak of the chicken-and-egg dilemma when it comes to arguing for design choices: is it good because the AI found a good solution, or is it good because it was parameterized well (and the AI could only find good solutions because of that)? Whether all the effort was worth it cannot be answered if there is no validation and comparison. And last but not least, anything related to design is highly subjective. I love living in a house 140 years old. I consider it better than many modern ones…





There is no real dichotomy between subjectivity and objectivity; almost everything creative has both components. That property is not one metric going from one side to the other, but two different properties, two different metrics. More than actually being like this, it can be understood like this, but even so, both of them can be automated through ML. The proof is in the recommendation algorithms for ads, videos or products, which approximate the result to your personal tastes or to the statistical profile you belong to. You can go further: if you bias a GAN specialized in some domain of design with a specific subjectivity, the result will still be within that subjectivity. Like everything, it is applicable to some things and not to others. In genetic algorithms or reinforcement learning, you can define the fitness/reward function with your subjective biases. The point here is that ML can absorb subjectivity in many, many ways. We have a cultural bias to associate subjectivity with individualism, but most subjective things follow a statistical order (usually a normal distribution) when you measure the subjectivity of many individuals.

I have not reread what I said, but what I remember was the intention to demonstrate the applicability of ML in design. It’s not that we should do it; it’s that these are very serious business opportunities (also big fuck-ups) because ML has opened a new spectrum of problem solving, and all industries are looking at whether there is something that can now be solved. I don’t think it will replace what already works very well, but new solutions will appear along the whole chain that goes from the designer to the final client. I have no doubt about that.

In theory, yes. But predicting taste is really more a dream than reality.
If you get an advertisement based on AI, it is because they have a dataset of your personal data. They don’t know what you like or not; they predict you could like product X because people with a similar data profile have bought it (which gives no feedback about whether they like it).
Anyway, this particular use case is actually a pretty limited and basic usage of “AI”. And even with that, you get advertisements which don’t match. Just because you googled for licorice, and someone with a similar profile has bought it, doesn’t necessarily mean that you like licorice. This explains why these ads often don’t match.
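That “people with a similar profile bought it” mechanism is just neighbourhood-based collaborative filtering. A toy sketch (the products and profiles are invented) shows both how it works and why it misfires exactly as described, since the recommendation says nothing about whether you actually like the product:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two purchase vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Purchase history (1 = bought) over [licorice, chocolate, chips]
profiles = {
    "alice": [1, 0, 1],
    "bob":   [0, 1, 0],
}
me = [1, 0, 0]  # you only searched for licorice

# The most similar existing profile...
best = max(profiles, key=lambda name: cosine(me, profiles[name]))
# ...and what they bought that you haven't: that becomes the "recommendation"
recs = [i for i, v in enumerate(profiles[best]) if v and not me[i]]
```

Here `alice` matches your profile, so you get chips recommended, purely because she bought them; nothing in the data says you want chips.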

And this is my point regarding validation. Just because a database has data about something, it does not automatically mean this data is interpreted correctly. And you know what, from time to time people change their minds. I do like old houses; maybe this is just because I haven’t lived in a good modern one. If someone gives me something which completely contradicts my data profile, I can still like it more.

I’m not saying humans can predict better, but we can talk to each other better. Communication is more than asking questions. This is so important because there is a great portion of irrationality in us all.

I don’t agree with you, because it seems that you understand this as an exact-solution problem when it is an approximation problem. The point is not to create algorithms that never fail, but algorithms that give you more value than you had before. Personal likes are not arbitrary combinations of our preferences; there is huge causality in them. We are not so random, everything is related; the difficult thing is how to measure this relationship or how to model this causality. I am not denying the complexity of the problem, surely it is more complex than our most sophisticated future models, but that does not mean that they are useless or that it is a dream to approach subjectivity. There’s a reason why companies spend millions of dollars to get their hands on data; don’t you think this is enough validation?

If you agree with this but you still think the same, maybe you are approaching this from the semantics: saying that an algorithm predicts our tastes is, literally, not correct, and I agree with that; only someone tendentious or ignorant would claim it literally. But from a practical approach, it is correct to say that you can predict certain things based on certain data. The premise is very simple: if you have more knowledge about the customer, you can maximize sales by approximating the ideal product to the ideal customer, no matter how often you miss. And the more knowledge you have, the more you can infer and the less you need to measure the customer, because there is causality, or there are behavior patterns; it doesn’t matter if there are exceptions as long as it works on average.

Imagine how useful it would be to have this kind of semantic search engine for companies like Ikea, or any other company with a high variety of categorical products (many types of chairs, for example, but some are designed for the kitchen, the garden, whatever) that can be categorized in thousands of different ways (for summer, for grandmothers, for singles…). As a customer, you label your subjectivity in these categories (even in others that do not exist but that a natural language processor can relate to them) and it returns the products that best fit your tastes. Sales up, purchase time down. There are primitive versions of this kind of thing, and there is no technical or theoretical limitation to approximating the tastes of the customer better. This is my only point: it’s a complex problem depending on the measure you want to make.
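A truly primitive version of such a catalogue search is just tag-overlap scoring; a real system would map free-form language onto these tags with an NLP model. Everything here (products, tags, the query) is invented for illustration:

```python
# Hypothetical product catalogue with free-form subjective tags
catalog = {
    "chair_a": {"kitchen", "modern", "singles"},
    "chair_b": {"garden", "summer", "grandmothers"},
    "chair_c": {"garden", "summer", "modern"},
}

def search(query_tags, k=2):
    # Rank products by how many of the query tags they satisfy
    score = lambda tags: len(tags & query_tags)
    return sorted(catalog, key=lambda p: score(catalog[p]), reverse=True)[:k]

hits = search({"garden", "summer"})
```

The NLP step Dani mentions would sit in front of this, turning “something for the terrace in July” into the tag set `{"garden", "summer"}` before scoring.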

For the world of design, recommendation systems are the least of it; they are more relevant for wholesalers or retailers. But in the opposite direction it is relevant for the designer or the manufacturer: if you can obsessively measure the components of your product and associate them with sales, you can infer the properties that make your products popular, so you can design new ones that approximate the tastes of the market. Even though this is a dynamic and ephemeral system, you can make better decisions thanks to this trend analysis on steroids. Similarly for other types of applications within design: the goal of the tools is to help us, everything can be modeled, and the challenge is not to make it realistic, but to make it profitable.

Lots to unpack. I’m going to break this down point by point because I really value your skepticism and want to sharpen my own thoughts.

To clarify, I don’t use the new/old world analogy to disparage traditional workflows. Rather, I use it to imply there is a new frontier available to us that is largely unexplored. If we define this new world as a realm where AI is effective, what does it look like? What workflows / design processes are included here? I propose that this new world contains workflows that are highly parametric, multi-dimensional, complex, cross-disciplinary, and data-centric. Current workflows characterized by optioneering, linear decision making, and rules of thumb are particularly bad at addressing complexity and tradeoffs compared to the capabilities AI/ML approaches offer. Therefore, I think AI will revolutionize performance analysis and data-informed design workflows.

This is a bit absolutist compared to your previous, more nuanced comments, but I totally agree that there are tasks that are more effectively accomplished through direct modeling than parametric modeling. I also agree that it’s easy for us to throw cool tools at the wrong problems. But just because parametric modeling can be misused doesn’t mean it should never be used. If the goal is speed to completion, manual modeling is often faster when creating one preconceived option. If the goal is to explore many variations quickly, or to revisit a design over the course of a long project, the time spent on parametric modeling can easily outperform manual modeling. Furthermore, if parametric modeling of a design problem brings us into a new world where AI/ML/generative design/optimization is effective, as @Dani_Abalde implied, the cost/benefit proposition of building a parametric model improves. I also think we need to focus on designing better, not just faster, if we want to meet our 2030 sustainability targets. Finding a way to do both would be great.

I agree that interpretation of the data is critical. Garbage in, garbage out, right? I have found success applying data-centric workflows within integrated teams where the expert acts as the data interpreter for the larger design team. However, I’m becoming more confident that improved software and AI-type applications may be able to automate this “interpretation” process. Automating this interpretation with simple linear-logic / parametric / non-generative coding is limited, but AI-type solutions hold great promise. Expertise that is difficult or impossible to code using traditional methods can be “learned” if we feed the AI the correct experience data. I strongly feel that the creation of these AIs needs to be done by experts, and done carefully, to avoid the pitfalls you identify.

@Dani_Abalde’s recent post makes a strong case about the potential of AI to assist subjective workflows. I won’t try to re-state it. I think the theory is sound but would like to see a real-world application. One idea I had was to have designers look at images to rate view quality (VR would be the best medium). This data could then be used to power a view-quality analysis engine. In this way a subjective measure of taste (is the view good or bad) could be reasonably predicted for an average user and used to evaluate various design iterations. This approach could be applied to various subjective measures. Although the idea of a black-box beauty generator seems far-fetched to me, I feel confident that smaller, targeted AIs can easily be developed to assist a designer.
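The view-quality idea can be prototyped crudely without any deep learning at all. In this sketch a k-nearest-neighbour average stands in for a trained model, and the two features (sky fraction, greenery fraction) and the ratings are invented purely for illustration:

```python
# Hypothetical training data: designers rated views (0-10), and each view
# is described by two invented features: fraction of sky visible and
# fraction of greenery visible.
ratings = [
    ((0.8, 0.6), 9.0), ((0.7, 0.5), 8.0), ((0.2, 0.1), 2.0),
    ((0.1, 0.3), 3.0), ((0.5, 0.5), 6.0), ((0.9, 0.2), 7.0),
]

def predict(view, k=3):
    # k-nearest-neighbour average: a crude stand-in for a trained model
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(ratings, key=lambda r: dist(r[0], view))[:k]
    return sum(score for _, score in nearest) / k

good = predict((0.75, 0.55))  # near the highly rated views
bad = predict((0.15, 0.20))   # near the poorly rated views
```

A production version would replace the hand-picked features with image embeddings and the k-NN with a regression network, but the workflow (collect subjective ratings, predict for new views) is the same.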

If the quality in your design discipline can be measured, then an expert system can be created based on AI, as is already done in medicine. But quality is not subjective; in my opinion it is mostly objective, because what seems to do some good is usually shared by the majority. It depends on how you define quality, of course.

This is already done with interactive genetic algorithms like Biomorpher, an optimization technique that involves the author’s perception in the selection phase.

But if it is not an expert system, the way to predict subjectivity is not to pretend to predict the general, but the concrete, many concrete things. It will be very difficult to predict that something fits a given profile of subjectivity if what you measure is everything (like measuring whether it is good or bad, the whole subjective validation), because the variation of dependencies in this space of possibilities is ridiculously large. But you can learn or predict specific things (like measuring whether this space is relaxing, or whether this facade design is attractive enough to make you go inside): you can find statistical patterns that relate the dataset (people’s opinions/feelings plus the design definition) to the measurement, and with this model predict to what degree a new design satisfies the measurement. Is my building similar enough to the surrounding architecture? If you include these feature validators in a generative model, you can present to the customer a parametric model that, instead of having scalar values of measures/styles/whatever, has subjective features like relaxing/active, striking/common, modern/classical, and adjusts the parameterization of the model under the hood by using these features. I have no doubt that it can be done, because I have seen similar things. My doubt is whether this is really profitable, because as you mentioned, it is not about innovating for its own sake but about solving real problems.


Machine learning libraries are not written in Python; actually it’s C++ code interfacing with Python. People training their neural nets are using Python; many of them don’t even know how to implement a complex neural net. Besides this, AI is more than machine learning. A genetic solver, as Galapagos is, is also part of what we call AI, but it is not learning in that sense.
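For reference, the kind of genetic solver meant here (Galapagos-style search, no learning involved) fits in a few lines. The bit-counting objective is a deliberately trivial stand-in for a real fitness function; in GH it would be whatever number your definition outputs:

```python
import random

random.seed(42)  # deterministic run for illustration

def fitness(genome):
    # Toy objective: count of 1-bits (stand-in for a real design metric)
    return sum(genome)

def evolve(pop_size=20, genes=10, generations=30):
    # Random initial population of bit-string genomes
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fitter half (elitism)
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)     # single-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(genes)] ^= 1  # one-bit mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Nothing is “learned” between runs: the solver only searches. That is the distinction being drawn against neural nets, which retain a trained model.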

@Dani_Abalde and @leland.curtis

It’s really not that I’m not getting your argumentation. You both are just much more optimistic about the benefits of AI in design and architecture. You know, I really ask myself how I could use it in a way that solves a problem. You can call me narrow-minded, but I really see no effective use case for anything I’ve done in my entire career. Of course I’m not working as an architect, I have never done so professionally, but when I think back to internships during my studies, there are probably one or two things where it might fit. But none of them are directly connected to the process of designing; they are rather related to workflow improvements. And in both cases it would be absolute overkill to apply it.
I mean, I see the potential, but I also see the difficulties. Hardly anyone, including myself, truly understands this technology. People use it, but with controversial outcomes. So some invert the flow of causality, giving meaning to something which has no meaning.

Regarding Neural Nets and Deep Learning.
In the end it’s an algorithm. It’s not so different from other code. Its greatest advantage is its greatest disadvantage: the person who trains it does not need to create the logic themselves. But since that person doesn’t know what’s going on in the hidden layers of a neural net, or how the net was implemented, they are also not able to modify the logic. If you don’t like the outcome, you need to retrain it. In the end you have to live with what you get. So it may create billions of variations, but not a single variation may truly fit.

I’m not even sure if predicting a design choice is something useful. I mean, isn’t a designer or architect also hired to help find design choices for a person who has difficulties with them?! You do something, and your client says if they like it or not. Simply understanding your client will help you with that. If you don’t know your client, how could you train an algorithm to find out? Isn’t that a contradiction?

In any case, I believe we are somehow going in circles here. I guess I’ll leave it as it is now :wink: Still, it has been a nice conversation.