In a recent panel discussion with Hernan Diaz Alonso, we exchanged differing opinions about digital computers but agreed on one crucial point: debates about digital computers versus hand drawings are over. The digital computer is not only an established part of architectural practice, but is central to it. This is because it can do things that hand drawing cannot do, in particular facilitating a type of construction ever more prevalent in the building industry. Then our discussion turned in another direction: the future development of computing machines in architecture.

My contention was and remains that there will be a resurgence of analogue computers, with which Hernan did not disagree. The digital revolution is over. While refinements in software and hardware will continue, digital computers have won their central role and will not lose it, short of a collapse of the present civilization. However, they have their limitations, and these are already becoming clear. The breaking down of the world into infinitely manipulable bits and bytes leaves a vast empty space in human thought that cannot be filled, or should I say 'represented', in that way. Without representation in some form, thought cannot exist. This is where analogue representation comes into the story.

First of all, what is an analogue? An analogue is something that shares certain qualities with a subject or an object under our consideration, but not others. In other words, it is not a literal, 'virtual' representation of the subject or object, but rather a symbolic one. Analogue thinking is thinking in symbols and produces representations or (a better term) descriptions. The advantage this analogous kind of thinking has over literal descriptions is, in the first place, that it can describe things that have not been known or described before: types of space, systems of order, even emotions. While this same possibility is often claimed for virtual representations or descriptions, I contend that every virtual invention has a history of models on which it is based. Let me give a small example.

I once asked my students, as an overnight sketch problem, to make a drawing of a being from another, very different world—an alien. The next day, the drawings they showed me looked like fusions of human beings and snakes, beetles, various plants, and so on. The point is, my students—very bright and creative—could only create hybrids of living things they already knew. Part of the reason for this, I concluded, was that they made representational, 'virtual' drawings. If they had given me a mathematical equation, or a matrix of different colors, they would certainly have been able to make aliens we had never seen before. At the same time, their description would have had to be translated—interpreted—to result in a conventional representation or portrait of the alien. Then again, why would we need to render them conventionally?

The answer to this question is: we are used to it. The rendering of what a thing looks like is the way we are accustomed to getting descriptions. The digital computer is so popular and accepted because it specializes in exactly this kind of description. However, it can describe what we don’t already know only in very limited ways, usually by montage or collage, that is, by combining things that we do know into descriptions of something we don’t. This is a serious limitation when we are exploring the unknown. In that situation the analogue computer is a far superior tool.

The most powerful analogue computer known is the human brain. Consisting of millions of neural nets—electrical circuits—it can compute numerous complex operations simultaneously—not only walking and chewing gum at the same time, but many other involuntary and voluntary body functions while working through subtle emotions and complicated philosophical questions, all at the same time. Minute after minute, day after day, throughout a lifetime. Some hundred billion neurons make up these circuits, engaging in a nearly infinite number of continually changing interactions. The statistics go on, but the point remains that the analogue computer works by abstract descriptions, not literal ones.

Now, if the world inhabited by human beings could be controlled only by electrical impulses that could command bricks to be moved, concrete to be poured, steel to be made; or crops to be planted and harvested; or laws to be enacted and enforced, then the story could begin and end with the human brain. Perhaps that will be the future direction of human technological evolution on the planet. But until such a time, it will remain for us to interpolate between the analogical and the digital, between abstract descriptions and the literal representations of things. To a large extent, the task of this kind of interpolation makes up the history of science and art.

The 'education' of the human brain is an ongoing, increasingly important task. But so is the invention and development of the technological prostheses we require to interpolate, to bridge the gaps between abstractions and representations. With digital computers advancing so rapidly, we have neglected the potentials of their analogue cousins, such as those that would enable:

—slum-dwellers to analyze their own complex communities, the better to organize politically and economically;

—urban planners to understand the continually changing layerings of human activities within a dense city center;

—architects to incorporate available recyclable materials in the design stages of their projects.


A coming generation of analogue computers will differ from digital computers in many ways, but the most crucial is that they will each be designed and built for a particular situation and task, rather than as a 'generalized' machine usable for all situations. If we think about it, this follows the example of our brains, which would not serve a cheetah very well. Indeed, my brain would not serve you very well, as it is continually being constructed by my unique life experiences. But this takes us in a direction this post cannot go. It must suffice here to say that the analogical might well be, and perhaps should be, at the center of the next great technological revolution.



WORTH READING: Though not directly related to the above, a recent article by Allison Arieff about writing architectural criticism raises some worthwhile points.
