[Unpublished opinion piece]

This is a short opinion piece I wrote a couple of years back which I don’t think was ever published. I was either too late with my submission or something else went wrong. Just came across it on my Google Drive by accident. If you’re into computation and architecture you may find it mildly interesting.

The Inevitable and Utter Demise of the Entire Architectural Profession </clickbait>

Recent years have seen an increase in positive popular-culture references to cutting-edge computational developments such as artificial intelligence, neural networks, machine learning, and other new-fashioned concepts. These algorithms have the potential to execute tasks well beyond the reach of conventional computing. Some of these tasks are currently performed by old-fashioned human beings, who have invested years of their lives in becoming experts and who expect to be financially rewarded for performing them. You may be—or perhaps one day hope to become—one of these human beings, in which case the prophecy that these algorithms are just around the corner may upset you. But fret not, for I have some good news. Incidentally there will also be a little bit of bad news, so you may want to fret some.

The good news is that these algorithms have been just around the corner for a good long while. Their arrival has been billed as imminent since at least the 1970s, and while certain companies such as Google and Facebook may rely on them in their everyday functions, only big companies can afford to develop and maintain them in the first place. It is also easy to become unduly impressed by things like Google’s image recognition software, but if you think it is good at spotting kittens online, wait till you see me parse a photograph. I can even tell you whether the picture features a hungry, sleepy, scared or playful feline, and I haven’t had millions of dollars invested in my kitty detection abilities.

We are impressed by these artificial neural networks not because they can outperform humans, but because they outperform code written by humans, which really isn’t the same thing at all. The reason Google needs image recognition software is because it would cost too much to employ people to categorise all the images that make their way onto the internet every minute of every day, not because humans aren’t up to the job.

Incremental improvements in AI research certainly add up to an impressive whole at the academic level, but they are not the harbingers of the Imminent Singularity™. Paradigm-shifting innovations tend to come out of the blue and are therefore utterly unpredictable in both their timing and their details. This, incidentally, is the bad news: whatever is going to make you unemployed is in all likelihood not something you will see coming from a long way away.

Yet it can’t be denied that computers have had—and will continue to have—a significant effect on architecture. CAD provides access to geometry which is cumbersome to describe or measure on paper. CAM provides access to affordable, yet accurate, bespoke elements. BIM promises to aggregate the totality of administrative, legal, and structural data. Of course a cursory glance at history reveals that byzantine structures with an abundance of bespoke detailing are not, by any yardstick, recent phenomena. The worth of CAD, CAM and BIM isn’t measured in units of innovative ingenuity; it is measured in units of time saved and money earned. In other words, these computer technologies are hardly manifestations of some unparalleled, paradigm-shifting wizardry; they merely make certain buildings possible in the current socio-economic climate with its high labour costs.

Nobody knows if—and especially not how—computers will surpass human intelligence. Any career saving strategies we wish to adopt had better be based on solid observation of the recent past, rather than wishful or panicked thinking about the distant future. Unromantic as it may be, any discussion about the place of both computers and humans in architecture must limit itself to the facts in order to be productive.

The most salient fact regarding computers is that they are very good at doing sums. The clue is right there in the name. A computer can calculate the standard deviation of a million numbers before your finger has had time to let go of the Enter key. The aphorism that it would take a team of humans ten years to do what a computer can do in one second, has the corollary that a computer can also make more mistakes in that very same second than you could ever hope to make in a lifetime. This is because, ultimately, a computer has no idea what it’s doing, let alone why it’s doing it. Understanding stuff is what humans are good at.
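To make the speed claim concrete: even interpreted Python, with no special libraries, gets through a million numbers almost instantly. A minimal sketch (the timing is machine-dependent, and a compiled language would be faster still):

```python
import random
import time

# One million pseudo-random numbers in [0, 1).
data = [random.random() for _ in range(1_000_000)]

start = time.perf_counter()
mean = sum(data) / len(data)
variance = sum((x - mean) ** 2 for x in data) / len(data)
sd = variance ** 0.5  # population standard deviation
elapsed = time.perf_counter() - start

print(f"standard deviation of {len(data):,} numbers: {sd:.4f} "
      f"(computed in {elapsed:.3f} s)")
```

For uniform numbers in [0, 1) the result hovers around 0.2887, and the computation typically finishes well before your finger has left the Enter key. Of course, the script would just as cheerfully compute a wrong answer a million times over if the formula contained a bug.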

The introduction of computers in fields other than architecture hasn’t always been without issue and there are important lessons to be learned here. Famously, the introduction of autopilots in commercial airliners has resulted in a net loss of pilot skill. Most of the time the autopilot keeps the plane level and pointed in the right direction, but when something goes wrong the human pilot needs to take over. Unfortunately this now happens without the benefit of endless hours of active flight time, or indeed a clear understanding of the immediate events that preceded the failure. The problem appears to be that the human is employed to supervise a machine executing a monotonous task. This is not something humans do well. I appreciate that it is very tempting to delegate boring work to a machine, but if the machine can’t be fully trusted with this task, it may make more sense to have the computer supervise the human instead.

If architects wish to more fully integrate computers into their practice, yet not be displaced by them, it is vital that humans are allocated those jobs that humans do well, whilst computers are allowed to focus on what they do well. The architect remains the designer, while the computer becomes the critic. Does this building violate envelope restrictions? Will this façade melt cars at a thousand paces? Does the piping intersect itself anywhere? Can you actually see the major landmarks from the penthouse offices? Architects need to ask themselves whether they would rather second-guess computer generated designs, or whether they would prefer to have computers audit human creativity.
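A deliberately toy sketch of the “computer as critic” idea; every rule, function name, and number below is invented for illustration and not taken from any real building code or software:

```python
# The human proposes a design; the computer audits it against hard
# constraints and reports violations in plain language.

def audit_envelope(floor_heights, max_height, max_floors):
    """Return a list of human-readable violations; empty if none."""
    violations = []
    if len(floor_heights) > max_floors:
        violations.append(
            f"{len(floor_heights)} floors exceeds the {max_floors}-floor limit")
    total = sum(floor_heights)
    if total > max_height:
        violations.append(
            f"total height {total:.1f} m exceeds the {max_height:.1f} m envelope")
    return violations

# A ground floor of 4.5 m plus eleven 3.2 m storeys.
design = [4.5] + [3.2] * 11
for problem in audit_envelope(design, max_height=36.0, max_floors=10):
    print("VIOLATION:", problem)
```

The point is the division of labour: the human makes the creative decision, the machine performs the tireless, monotonous checking that humans are demonstrably bad at.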

Since many constraints are project specific, architects will themselves have to transcribe these constraints into computer algorithms, leading us to the concept of responsibility, the core ethical problem of computing in architecture. If algorithms are awarded salient roles in the design process, who is at fault when they go awry? Can the architect be held accountable when a closed-source algorithm written by some third party is flawed? How about if the algorithm is open-source, but the architect doesn’t bother to—or doesn’t know how to—appraise it? What if an algorithm is unwittingly applied beyond its scope?

Since the reality is that the architect is almost always legally responsible, it is my conviction that she should do whatever it takes to claim creative responsibility as well. This requires a decent understanding of algorithmics and computational theory, as well as at least a vague understanding of any relevant algorithm, especially its boundary conditions and failure points.

If the architect is to claim responsibility over the outcome of a computation, then that outcome must permit rigorous evaluation. Unless you know how to justify every single design decision your computer has made on your behalf, your career in architecture may well be forfeit.

In conclusion: there should be no reason why algorithms—traditional or “intelligent”—pose a threat to the continued employment of the architect, provided each party keeps to their respective spheres of expertise. Computers have the capacity to harm only when they are employed unthinkingly or their output is allowed to go unchecked.


Conclusion #2 (regrettably not my own): Machines should work; people should think.


It seems from many posts here that appear to be from first-year architecture students that they expect the software to be an oracle, telling them what to do. There’s a difference between undertaking a project with a definite goal in mind, then having a serendipitous moment along the way, and depending entirely on serendipity. Asking the software to spit out a multitude of forms and then choosing the best from among them is more like a job for quality control. On the other hand, with the advent of neuroaesthetics, where “best” can ostensibly be quantified, that might be where we’re headed.


I understand this came out of that media wave claiming AI was going to replace architects, right? In general I agree, but there are things that I see differently. Sorry for going on at such length. :sweat_smile:

After the AI winter, ML really started to become popular around 2010–2012, with the rise of deep learning.

It is not true that it takes a lot of capital to deploy ML, especially these days. These heavyweight companies spend so much money simply because they can, not because they have to. They can use brute force to train networks because it is a path worth exploring (see GPT-3). If you need to recognise anything, you need thousands of examples of it. Those companies have big ambitions, and there are not so many experts, so their price is high, but that doesn’t mean it takes millions to use ML. Proof of this are all the ML competitions where independent, skilled researchers solve really difficult and otherwise impossible problems. In addition, there are hundreds of small companies offering ML services.

Machines can understand and reason. They understand numbers like our brains understand electrochemical signals; the lower level does not harbour higher-level meaning. A software with a sufficiently large knowledge graph and NLP can understand as a human, because in the end understanding consists of relationships (subject, relation, predicate, context). Our advantage is that from birth we are training our knowledge graph, but there is no evidence that a machine cannot replicate it if given enough experience. There are also reasoning algorithms, and reasoning is more sophisticated than understanding: they infer knowledge that was not in the input data, because knowledge can be represented in a way that machines understand. That they interpret information differently does not disqualify them from understanding it. What happens is that we are a long way from giving a machine all the experience and knowledge necessary to understand a concept as we can. The same goes for humans: explain to someone in an unvisited tribe what a mobile phone is, you can’t, in the same way that an ML algorithm trained with the knowledge graph of living things cannot understand what a stone is; they lack the relationships needed to infer understanding.

Even more exotic forms of understanding other than language-based, such as spatial/visual/geometric or emotional, can be simulated by machines, if not today some day, because we don’t work with magic. In the end these debates often end up in semantic or philosophical positioning of what exactly is meant by what, but in actual practice, the result will be equivalent.

If we ever allow software to be legal agents or agents with rights (and responsibilities), we will unleash a terrible world. Machines are tools, not dignified beings. The responsibility lies with the person who approves the tool’s output or the company providing a negligent service.

I fully agree. To complement this, the trend that we are coming from, that the user is stupid and does not need to know what they are using, is completely irresponsible and creates a gap that separates a certain technological elite who are allowed everything, from the users who use that technology without being aware of what they are doing. Technology should be like food in the supermarket, red if it is unhealthy, green if it is good. Everything that is coming is alarming if we do not establish ethical principles and access to information.

I don’t think differentiation is the key here. You were overly categorical in assigning design to the human and criticism to the machine, because there is, or will be, too much overlap there. ML will be very useful for designing, at least in the initial phase of the creative process where ideas are prototyped or played with. If you design shoes, instead of parameterizing measurements we will parameterize features: instead of size and style, you have sliders that go from more party to more office, from more classic to more modern, and things like that; this was already being done in 2015. Tools like this let you swap between auto-generated templates based on your design requirements, or use them as concept designs or inspiration. On the other hand, the human has to evaluate the final result of the expert system, because in complex systems there is never such an expert system that always always works well. The variability of what can go wrong is usually greater than the variability of the capability of anything. The designer of the future will write a sentence describing the model and the machine will give him the model, or, describing an environment with images, it can suggest key elements to integrate your model into the environment. On the other hand, a machine can give you an optimal distribution of material to support loads, but you as a human see that ugliness and can retouch it to make it more aesthetic. Anyway, I see a lot of overlap, and it is too complex to clearly differentiate the roles of human and machine.
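A minimal sketch of the “feature sliders” idea; the shoe presets, parameter names, and numbers are all invented for illustration:

```python
# Instead of editing raw measurements, the designer blends between two
# hand-made parameter presets with a single slider.

classic = {"heel_height": 2.0, "toe_roundness": 0.9, "strap_count": 1}
modern  = {"heel_height": 8.0, "toe_roundness": 0.3, "strap_count": 3}

def blend(preset_a, preset_b, t):
    """Linear blend of two presets: t = 0.0 gives preset_a, t = 1.0 gives preset_b."""
    return {k: (1 - t) * preset_a[k] + t * preset_b[k] for k in preset_a}

# Slider halfway between "classic" and "modern".
print(blend(classic, modern, 0.5))
```

Real systems of this kind interpolate in a learned latent space rather than in hand-picked parameters, but the interaction model is the same: the human moves a semantic slider, the machine resolves it into concrete geometry.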

It is false that machines are going to eliminate architects, but it is true that the demand for architects has to decrease, because more and more tasks can be automated and there is less and less building space (at least in this planet). I don’t think it will happen quickly, because although what ML does seems like magic, the truth is that this magic is the best of the results, and often not even the majority are that good; often only part of the trained algorithm is released and the developers reserve the fully trained version for themselves. On the other hand, in my opinion, I do not think the initial trend will be architecture software empowered with AI; I believe companies that do AI projects will come first, with more and more ML influence. The most difficult thing about ML is not the algorithm, but to get the training data, or how to reformulate things to adapt them to machine learning. It is a discipline where making local applications is achievable, but making them generic is crazy, and more so in a discipline as complex in as many aspects as architecture. So for the near future I see projects that use ML in ways that would make an architect worry (which probably already exist) as more likely than architecture software implementing these tools. It all depends on how long it takes to get sufficiently rich data to train existing algorithms.


Rhino based Finch3D is getting closer, at least in some aspects of ‘automated design’.




A software with a sufficiently large knowledge graph and NLP can understand as a human

Oooops! Slowly recovering.

from birth we are training our knowledge graph, but there is no evidence that a machine cannot replicate it if given enough experience.

Absence of evidence replaced with considerable amounts of faith.

…they infer knowledge that was not in the input data

Like the point on a line between points, I presume?

… explain to someone in an unvisited tribe what a mobile phone is, you can’t

Try to explain what a human is to someone strongly opinionated about ML. But limit your expectations for success.

such as spatial/visual/geometric or emotional, can be simulated by machines, if not today some day

Exactly HOW MUCH faith do I have to work myself up to before I can take that “some day” claim seriously? Isn’t that an example of pure faith? Some day… ?

because we don’t work with magic.

What if you did just that, mixing mere speculation into a serious matter about what a human really is?

If we ever allow software to be legal agents or agents with rights (and responsibilities)…

Would it care more than Stalin and the Nazis did about eliminating any “obstacles” in its desires?

A certain attitude (…) towards other people is a fairly reliable indicator of personalities which cannot sense a difference between a human and a machine.

Machines are tools, not dignified beings.

If both are functionally equivalent knowledge graphs, humans have no dignity.

Is “dignity” earned or ascribed? If earned, do boomers qualify? If the latter, who ascribed the dignity of what you are? Open question.

that the user is stupid and does not need to know what they are using, is completely irresponsible

Or, consider the officer who puts a machine gun in the hands of an untrained soldier and then leaves the soldier to find out whatever he can about the gun, without even knowing it’s a gun. Who’s irresponsible in that case?

Everything that is coming is alarming if we do not establish ethical principles and access to information.

We do have ethical principles. But it seems they have to be followed to make any difference. Didn’t work out so well, so far?

… and access to information.

Does that involve boomers and people with different views of what is “ethical”? How do you define “ethical”? With machine guns?

the human has to evaluate the final result of the expert system,

Why? (given the hopes you have for the future of machines)

… because in complex systems there is never such an expert system that always always works well.

Huh? Machines were said to become just as capable as humans, or did I read that somewhere else?

( big snip of… a world of parametrized templates of… spooky ideologies )

It is false that machines are going to eliminate architects

Not if they reach our level of knowledge and capabilities, including the emotional stuff.

there is less and less building space (at least in this planet)

I don’t believe so. First they said we are overpopulated (we were some 3 billion when it was supposedly already too many). Then we learned to produce and distribute food more efficiently. So let’s talk about something else. OK, so now there’s not enough space for buildings.

I do not believe you.

The most difficult thing about ML is not the algorithm, but to get the training data

That’s how we, the users, became the product in today’s social media. Without all the intricate photos and texts provided by the masses, for free, there would be no business in either ML or in selling your privacy all over the place, refined by ML from that very same data. It’s about identifying the product which can be sold, including the soul.

What really is of concern is how to maintain the knowledge and skill needed to maintain the world we have already created. ML is only one of the factors that threaten to push us over the edge. Here’s a link to a thought-provoking talk on the subject, “Preventing the Collapse of Civilization”, by developer Jonathan Blow:

Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)

And another talk from which Jonathan got some of his ideas, by Dr. Eric H. Cline, Professor of Classics and Anthropology and the current Director of the Capitol Archaeological Institute at The George Washington University. He was considered for a Pulitzer Prize for his recent book … :

1177 BC: The Year Civilization Collapsed

The Q: Will it happen again, and why? (many relevant hints already visited above…)

I agree with most of what David Rutten said in the Original Post, which I think is the result of some Deep Learning on a truly human level.

// Rolf,
Yet Another Boomer

Edit: Just so that the consequences of the emerging boomer contempt aren’t entirely lost on the readers: as one of many aspects of über-complex technology, I’d like to add the following slide from Jonathan Blow’s talk, about a long-term consequence, experienced over and over again in history, of why the nowadays increasing contempt or outright hatred between generations is especially dangerous in our high-tech world (just before this slide he gave an example of how the early microchip designers messed up due to a lack of transfer of deep knowledge about what they were doing). Taken all together, the conclusion would be that:

“Nothing new under the sun” wrote an ancient king, who knew that stuff like this had already happened several times in the most ancient of days.


just like my dog lol!


That’s what “scares” me.
I’m having a very bad experience on Fiverr, where I hoped to round up some cash from those who want to skip this forum and get things done.
Workers and professional environments? Usually no problem!
But most of the time they are university students with NO IDEA of what they want! They attach a couple of pictures (completely unrelated to each other) and sort of tell me: “Do the thing”.
No joking.

I can see how that gets “scaled differently” in a university context: professors are implementing Grasshopper as a “gadget” or “ornament” and think students will magically level up in some way.

In my first week of Grasshopper I made an origami definition with Kangaroo and put the video on YouTube.
Over the years a lot of architects have contacted me asking for the definition, and I told all of them: “please don’t use this for structural stuff! Origami are meant to fold!” Their responses made me understand they care more about impressing than actually making proper use of GH.
One guy made a full project for a huge airport with “roof origami” using my definition, but the structure underneath seemed to be made by a 10-year-old kid using sticks and straws.
I stopped caring at that point.

Pearls to pigs.


Hi David,

Great piece! Sometimes when I see all these really fucked up buildings (and products, furniture, cars, jewelry) created computationally without good designers’ craft, taste, experience, love, or supervision I wonder what people like you, who created these tools, might be thinking…





If our civilisation collapses as well, my bet is definitely on the Sea Peoples.


It is quite different; that information comes from an explicit linear function, f(a, b, t) = (1 − t)·a + t·b. Inferring requires deducing new facts from given evidence. If you know that an animal is a living being and are told that a cat is an animal, then you can infer that cats are living beings. What Siri or Alexa or any voice assistant does is use a knowledge graph to infer the user’s intention, since programming everything explicitly is impossible.
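For what it’s worth, the cat/animal example can be sketched in a few lines; the triples and the single transitivity rule are a deliberate simplification of what real knowledge-graph reasoners do:

```python
# Toy forward-chaining inference over (subject, relation, object) triples.

facts = {
    ("cat", "is_a", "animal"),
    ("animal", "is_a", "living being"),
}

def infer(triples):
    """Repeatedly apply transitivity of 'is_a' until nothing new appears."""
    known = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(known):
            for (c, r2, d) in list(known):
                if r1 == r2 == "is_a" and b == c and (a, "is_a", d) not in known:
                    known.add((a, "is_a", d))
                    changed = True
    return known

# ("cat", "is_a", "living being") was not in the input data,
# yet it appears in the inferred closure.
print(("cat", "is_a", "living being") in infer(facts))
```

Which is also a fair answer to the “point on a line between points” quip: the inferred triple is not computed by any single explicit function of the inputs, it is derived by repeatedly applying a rule until a fixed point is reached.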

You can take it seriously when you graph the advances over time; then you can see the trend for the future. Emotional recognition already exists with various types of input data, and even if it is done semantically, nothing prevents it from being done by chemical concentrations. There are no restrictions; it is simulating equivalent behaviour for each of our senses and abilities. You can argue about the meaning of each thing, but the results are equivalent when a person cannot distinguish whether they come from a machine or a person. Faith is believing that we are special and magical, not making reasonable predictions.

Sorry, but nobody here talked about what a human being is. I talked about understanding and reasoning by machines, saying that they already produce equivalent results in specific domains. I am saying that we do not function with magic as a way of saying that everything is material; that everything, even what science cannot explain, is physical. From a thought to a transcendental experience, it has a physical mechanism. Faith is believing the opposite, because instead of thinking that the unknown is like all other physical things, it is given an unjustified special character. And if something is not magical, i.e. is physical, it is likely to be computable. Aren’t these facts self-evident? Isn’t faith necessary to think otherwise?


I’m going to stop wasting my time with you. I have not reduced the human being to a knowledge graph (which is a form of representation) as you have done. A living being is dignified implicitly, by the fact of being alive. It is simple respect for life forms.

The job of an architect is one of the most demanding jobs I have seen. On the other hand, I’ve hardly ever seen so many crappy statements from similar professions. There is a large chunk of ‘blabla’ in almost all of these articles, interviews and broadcasts. In the end it seems more like a hymn of self-praise than content with substance. When trying to find the motivation for why people use A.I., only a few have actual use-cases worth solving. It is really hard to find out about the why.

Sometimes it rather feels like people are bored with designing or with work in general. They dream of downloading an app, tweaking some parameters and pressing the computation button.
Cherry-picking, like any of these 18-year-old Instagram girls, job done!

That attitude and lack of motivation is actually the bigger threat to someone’s future. :slight_smile:


This is spot on. I’ve worked with many designers and architects, and very few of them have demonstrated any genuine interest in understanding and encoding their design intent. Engineers, on the other hand, have, which all makes sense I suppose (early-stage design, wicked vs tame problems, crayons, etc.).


Nice. I will have to go with Baudrillard on this one, the collapse of the real has already happened.

The simulacrum precedes the original, which in a way is what we do in Rhino. The hyperreal will be the integration of the virtual into everyday life, making it the new real.

We’re not running out of space to build. Earth’s surface is about 510 million km2. 70% of that is water, and we have barely occupied water to begin with. Regarding land cover, it has been estimated that by 2030 global urban land cover will be 1.5 million km2, very little compared to the total 510 million. If we assume that the only “good way” to settle a place is in an urbanized manner, not suburban or crop land, then all of the estimated 12.6% of land that is crop land can also be considered “urbanizable”.

I am not saying we should occupy all of this, and I am not commenting on ideals, but I think we are far from “running out of space to build”. Many cities have very bad area-to-density ratios: very spread out but not very dense within the metropolitan area, which is deceiving when analyzing these numbers, because you often see a city described as dense compared to rural areas.

So I think we haven’t even begun to properly occupy Earth, and we haven’t occupied most of it. Sadly, we have spread in a very bad way (crop lands, suburban, slums) across about 19 million km2 of the 149 million km2 of total possible land surface to cover (I am not sure if by “cover” they mean “urbanize”). I don’t think it is a good strategy for our societies to continue the trend of increasing expansiveness (urban sprawl), though I do feel we should increase the rate of globalization.
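The percentages quoted above can be cross-checked with a few lines of arithmetic, using only the figures from the post (all in millions of km²):

```python
# Figures as quoted in the post, in millions of km^2.
total_surface = 510.0   # Earth's total surface
land_surface  = 149.0   # total land surface
urban_2030    = 1.5     # projected global urban land cover by 2030
spread        = 19.0    # land we have "spread" across so far

land_share   = 100 * land_surface / total_surface   # land as % of surface
urban_share  = 100 * urban_2030 / land_surface      # urban as % of land
spread_share = 100 * spread / land_surface          # spread as % of land

print(f"land is {land_share:.1f}% of the surface")
print(f"projected urban cover is {urban_share:.1f}% of the land")
print(f"our total spread is {spread_share:.1f}% of the land")
```

So by these numbers urban cover would be about 1% of all land, and everything we have spread across so far about 13% of it, which is consistent with the post’s conclusion.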


Great op-ed. :bulb:

I think how the architectural profession uses computational design will follow the Dunning-Kruger curve. We have a few more cars to melt, yet.


Here’s another exemplary case of what is probably an architecture student trying to get things done:

I don’t know what scares me more, the audacity (aka disrespectfulness) …
… or the fact that, if he is trying this way, it means that in other situations this “do my uni homework” thing actually works out!

This… audacity to attach many files (each of which needs to be evaluated) and even “knowing” it should take just 5 hours total! … for 50 bucks!

A “character” like this is supposed to design something people would live inside?

Chills down my spine.


The question is whose fault it is. Is it the egoistic society we are experiencing so badly these days, or is it the shitty education system which even indirectly supports this mentality? Or all the politicians (= “role models”) we know to have had ghostwriters for their promotions? Giving 50 € for this is rather the fun part. It clearly shows that this person has no clue, and will probably fall on his/her nose soon anyway. On the other hand, people might change. And some who are having a really hard time understanding visual scripting may become really good architects… Who knows…

You can direct plenty of blame to the wonders of the “gig economy”, which is nothing more than a race to the bottom, the “Hunger Games” of the internet, where everyone becomes a disposable widget. This person, with some justification, did figure there was someone willing to do this for 50 €, and maybe there was, just not @maje90.

I’m sure there is a person doing this work for this money. But usually you get what you are willing to give… I wouldn’t even say this is a modern phenomenon, although it becomes much more obvious through the internet. I’m just saying everyone makes mistakes and sometimes changes. We don’t know anything about this (young) person’s motivation… That’s why I believe we shouldn’t prejudge someone’s ability to be an architect/designer!
