This is a short opinion piece I wrote a couple of years back, which I don’t think was ever published. I was either too late with my submission or something else went wrong. Just came across it on my Google Drive by accident. If you’re into computation and architecture, you may find it mildly interesting.
The Inevitable and Utter Demise of the Entire Architectural Profession </clickbait>
Recent years have seen an increase in positive popular-culture references to cutting-edge computational developments such as artificial intelligence, neural networks, machine learning, and other new-fashioned concepts. These algorithms have the potential to execute tasks well beyond the reach of conventional computing. Some of these tasks are currently performed by old-fashioned human beings, who have invested years of their lives in becoming experts and who expect to be financially rewarded for performing them. You may be—or perhaps one day hope to become—one of these human beings, in which case the prophecy that these algorithms are just around the corner may upset you. But fret not, for I have some good news. Incidentally, there will also be a little bit of bad news, so you may want to fret some.
The good news is that these algorithms have been just around the corner for a good long while. Their arrival has been billed as imminent since at least the 1970s, and while companies such as Google and Facebook may rely on them in their everyday operations, only companies of that size can afford to develop and maintain them in the first place. It is also easy to become unduly impressed by things like Google’s image recognition software, but if you think it is good at spotting kittens online, wait till you see me parse a photograph. I can even tell you whether the picture features a hungry, sleepy, scared or playful feline, and I haven’t had millions of dollars invested in my kitty detection abilities.
We are impressed by these artificial neural networks not because they can outperform humans, but because they outperform code written by humans, which really isn’t the same thing at all. The reason Google needs image recognition software is that it would cost too much to employ people to categorise all the images that make their way onto the internet every minute of every day, not that humans aren’t up to the job.
Incremental improvements in AI research certainly add up to an impressive whole at the academic level, but they are not the harbingers of the Imminent Singularity™. Paradigm-shifting innovations tend to come out of the blue and are therefore utterly unpredictable, both in their timing and in their details. This is the bad news, incidentally: whatever is going to make you unemployed is in all likelihood not something you will see coming from a long way away.
Yet it can’t be denied that computers have had—and will continue to have—a significant effect on architecture. CAD provides access to geometry that is cumbersome to describe or measure on paper. CAM provides access to affordable, yet accurate, bespoke elements. BIM promises to aggregate the totality of administrative, legal, and structural data. Of course, a cursory glance at history reveals that byzantine structures with an abundance of bespoke detailing are not, by any yardstick, recent phenomena. The worth of CAD, CAM and BIM isn’t measured in units of innovative ingenuity; it is measured in units of time saved and money earned. In other words, these computer technologies are hardly manifestations of some unparalleled, paradigm-shifting wizardry; they merely make certain buildings possible in the current socio-economic climate of high labour costs.
Nobody knows if—and especially not how—computers will surpass human intelligence. Any career-saving strategies we wish to adopt had better be based on solid observation of the recent past, rather than on wishful or panicked thinking about the distant future. Unromantic as it may be, any discussion about the place of both computers and humans in architecture must limit itself to the facts in order to be productive.
The most salient fact regarding computers is that they are very good at doing sums. The clue is right there in the name. A computer can calculate the standard deviation of a million numbers before your finger has had time to let go of the Enter key. The aphorism that it would take a team of humans ten years to do what a computer can do in one second has the corollary that a computer can also make more mistakes in that very same second than you could ever hope to make in a lifetime. This is because, ultimately, a computer has no idea what it’s doing, let alone why it’s doing it. Understanding stuff is what humans are good at.
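To make that claim concrete, here is a minimal Python sketch (the figures it prints are illustrative, not a benchmark): it generates a million arbitrary values and times the standard deviation calculation, which an ordinary laptop should dispatch in a fraction of a second.

```python
import random
import time

# A million arbitrary numbers: the computer neither knows nor cares what they represent.
values = [random.random() for _ in range(1_000_000)]

start = time.perf_counter()
mean = sum(values) / len(values)
variance = sum((x - mean) ** 2 for x in values) / len(values)
std_dev = variance ** 0.5  # population standard deviation
elapsed = time.perf_counter() - start

print(f"standard deviation of 1,000,000 numbers: {std_dev:.6f}")
print(f"computed in {elapsed:.4f} seconds")
```

Feed it the wrong million numbers and it will report the wrong answer just as quickly, which is rather the point.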
The introduction of computers in fields other than architecture hasn’t always been without issue, and there are important lessons to be learned here. Famously, the introduction of autopilots in commercial airliners has resulted in a net loss of pilot skill. Most of the time the autopilot keeps the plane level and pointed in the right direction, but when something goes wrong, the human pilot needs to take over. Unfortunately this now happens without the benefit of endless hours of active flight time, or indeed a clear understanding of the immediate events that preceded the failure. The problem appears to be that the human is employed to supervise a machine executing a monotonous task. This is not something humans do well. I appreciate that it is very tempting to delegate boring work to a machine, but if the machine can’t be fully trusted with this task, it may make more sense to have the computer supervise the human instead.
If architects wish to more fully integrate computers into their practice, yet not be displaced by them, it is vital that humans are allocated those jobs that humans do well, whilst computers are allowed to focus on what they do well. The architect remains the designer, while the computer becomes the critic. Does this building violate envelope restrictions? Will this façade melt cars at a thousand paces? Does the piping intersect itself anywhere? Can you actually see the major landmarks from the penthouse offices? Architects need to ask themselves whether they would rather second-guess computer generated designs, or whether they would prefer to have computers audit human creativity.
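By way of illustration only, here is a hypothetical Python sketch of that computer-as-critic arrangement: the human supplies the design, the computer runs down a checklist of project constraints and reports every violation it finds. The Design class and the individual checks are invented placeholders for whatever data and rules a real project would supply, not an existing toolkit.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Design:
    height_m: float               # overall building height
    max_envelope_height_m: float  # height limit imposed by the planning envelope
    pipe_clearance_mm: float      # smallest clearance found between pipe runs

def check_envelope(d: Design) -> Optional[str]:
    if d.height_m > d.max_envelope_height_m:
        return f"building exceeds the envelope limit by {d.height_m - d.max_envelope_height_m:.1f} m"
    return None

def check_piping(d: Design) -> Optional[str]:
    if d.pipe_clearance_mm <= 0:
        return "piping intersects itself"
    return None

def audit(design: Design, checks: List[Callable[[Design], Optional[str]]]) -> List[str]:
    """Run every check against the design; an empty result means the critic is satisfied."""
    return [msg for check in checks if (msg := check(design)) is not None]

violations = audit(
    Design(height_m=47.5, max_envelope_height_m=45.0, pipe_clearance_mm=-12.0),
    [check_envelope, check_piping],
)
for v in violations:
    print("VIOLATION:", v)
```

The division of labour is the point: the checks are dull, mechanical and exhaustive, which is precisely the sort of work the previous paragraphs argued should not be left to a bored human supervisor.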
Since many constraints are project-specific, architects will themselves have to transcribe these constraints into computer algorithms, leading us to the concept of responsibility, the core ethical problem of computing in architecture. If algorithms are awarded salient roles in the design process, who is at fault when they go awry? Can the architect be held accountable when a closed-source algorithm written by some third party is flawed? How about if the algorithm is open-source, but the architect doesn’t bother to, or doesn’t know how to, appraise it? What if an algorithm is unwittingly applied beyond its scope?
Since the reality is that the architect is almost always legally responsible, it is my conviction that she should do whatever it takes to claim creative responsibility as well. This requires a decent understanding of algorithmics and computational theory, as well as at least a vague understanding of any relevant algorithm, especially its boundary conditions and failure points.
If the architect is to claim responsibility over the outcome of a computation, then that outcome must permit rigorous evaluation. Unless you know how to justify every single design decision your computer has made on your behalf, your career in architecture may well be forfeit.
In conclusion: there is no reason why algorithms—traditional or “intelligent”—should pose a threat to the continued employment of the architect, provided each party keeps to its respective sphere of expertise. Computers have the capacity to harm only when they are employed unthinkingly or when their output is allowed to go unchecked.