There is much talk today about computation in architecture, concerning not only its implications for the design and production of tectonic objects—from chairs to buildings and cities—but also its inescapable philosophical consequences. Understandably, most of this talk, by a few theorists and many practitioners, centers on the digital computer and its capacity for rendering complexity and simulating reality. I say ‘understandably’ not because this is the proper focus for issues raised by computation, but because very powerful digital computers have become handy, accessible tools for everyone, so…why not use them? In fact, it is not the proper focus, especially if our interest lies in philosophical domains such as aesthetics and ethics, and in how architecture both embodies and enables them. An even more powerful and accessible computing tool—the human brain—should be our primary subject, and object, in understanding the nature and consequences of computation, for architecture and for the world.

There are precise historical reasons for this. It was advances in neuroscience during the 1930s and 40s—in understanding how the brain works as an electrical machine, a “biological computer”—that led to the rapid development of artificial, electronic computers. This advance—a leap, really—was prompted by dramatic discoveries in physics and mathematics during the teens and 20s, above all the invention of quantum mechanics. Niels Bohr’s Copenhagen Interpretation had radical and profound consequences not only for epistemology but for every branch of inquiry and practice: it holds that any scientifically precise description of a phenomenon must include the manner in which the phenomenon was observed. The human brain and wider nature were thenceforth intertwined and inseparable; the old barriers between the ‘subjective’ and the ‘objective’ were shattered, and a new era, the present one, began.

Architectural theorists and experimental practitioners would do well to give more attention to cognition theory—its origins and its contemporary forms—when considering concepts of computation. One of the key concepts to come out of cognition theory is ‘self-referentiality,’ which concerns the paradoxes created by the brain studying itself. Concepts such as ‘recursion’ and ‘feedback,’ ‘self-organization’ and ‘autopoiesis,’ are secondary consequences. Technological applications—such as software design and communications networks—and their relevance for architecture come further down the line.

Indeed, anyone who wishes to understand the role of computation in human thought and activity must study this recent history, particularly in the field of cybernetics, which, in the 50s and 60s, laid the theoretical foundations for contemporary cognition theory and for general systems and information theories, which in turn underlay the rapid advances of computer technology. I close this post with a succinct essay by Heinz von Foerster, one of the founders of the transdisciplinary field of cybernetics. I hope it whets your appetite for more.


“On Constructing A Reality” by Heinz von Foerster:


More on Heinz von Foerster and his colleagues:
