In June 2024, during a visit to the Herzog & de Meuron archive in Basel, something became impossible to ignore. The building is a purpose-built tower with apartments on top of five floors of archive. Nearly five hundred projects are represented through carefully curated physical objects: sketches, models, material samples, details, studies. Walking through the stacks was like stepping into Ali Baba’s cave. Well, for an architect. The quantity of work was as striking as what it made possible: tracing the evolution of an idea through myriad variations and decisions, the thinking made physical and traversable. The provenance of each design judgment was not documented. It was present. One could stand next to it and follow the arc from first sketch to built form, unpacking what was decided as well as how the decision evolved.
[Images taken by Lira; placeholders pending permission for an interior shot.]
This is what architectural knowledge looks like when it is kept whole. Not summarized, extracted, or reduced to rules, but embodied, inspectable, alive to revision. It raises a question: what would it mean for an AI-powered design tool to do the same, to hold architectural knowledge whole, in a form that remains inspectable and contestable across generations of practice?
The HdM archive suggests the shape of the answer: knowledge held whole is itself a form of calibrated restraint. It shows enough to be traversable without reducing the work to rules or instructions.
This essay argues that the most capable, empowering intelligence is the one that exercises the most deliberate restraint. Not restraint as a limitation — a system not yet capable of doing more — but restraint as a design decision: a continuous, context-sensitive judgment about what to resolve, what to surface, what to remember, and what to leave for the architect to work through. Call this the good enough intelligence. Not unfinished, but curated and calibrated: a viewpoint embedded in every interaction the system offers, grounded in a specific theory of how architectural knowledge is produced.
Things to Think With, Not Things that Think for You
The developmental psychologist Edith Ackermann spent decades at the MIT Media Lab studying how people (children and adults alike) construct understanding through the act of making. Her central contribution was the distinction between learning about something and learning through something. Learning about is consumption: someone presents you with information, and you absorb it. Learning through is construction: you build something, encounter resistance, adjust, try again, and the understanding emerges from the process of making. (Ackermann 2001)
Imagine an architect who has spent days working through the adjacency logic of a hospital wing: moving departments, testing circulation routes, discovering that the relationship between emergency intake and imaging merits a decision that reshapes the entire floor. They have constructed a kind of knowledge that cannot be acquired by reviewing a machine-generated layout, no matter how optimal that layout might be. The struggle with the problem is the understanding of the problem.
Terry Knight, MIT Architecture Professor in Design Computation, arrived at the same conclusion from a formal direction rather than a phenomenological one. Studying how students learned through the application of shape grammar rules (explicit, step-by-step transformations that generate designs), she observed that slow, by-hand rule application consistently produced deeper understanding than computer-implemented equivalents. What was lost in the computer version was not the output but the understanding of why the output was what it was. (Knight 1999, 6)
That phrase — things to think with, not things that think for you — captures the proper relationship between a learner and an intelligent tool. The tool should generate representations that are provisional, manipulable, and inviting of intervention. It should produce something the architect can work through, not something the architect merely works on. Especially for junior architects. The difference is between a design process that produces understanding and one that produces only output.
“Make Me a Gehry Building”
A phrase like this makes architects cringe. The cringe is not about whether AI will take architects’ jobs. It is about what the phrase reveals: a fundamental misunderstanding of how visual and spatial thinkers think. The assumption is that the design exists as a describable outcome, and the labor of getting there is an obstacle to be eliminated. But for an architect, the labor of getting there is the thinking. The spatial reasoning happens during the act of working through the design, not before or after.
The obvious counterargument is worth taking seriously. Constraint and speed are not always the enemy of thinking: extreme deadline pressure to complete an interior retrofit can generate insight that open-ended exploration never reaches. The argument for AI-generated options is partly this: more alternatives faster means more decision points, more chances to discover what you want by seeing what you don’t.
The argument misunderstands what kind of thinking is at stake. The productive speed of a deadline works because it forces the designer to draw from already-constructed knowledge and shortcuts built through thousands of hours of resolving complicated problems. An analogy from a different domain: does one speed up meditation? The duration is not an inefficiency to be optimized; it is the practice itself. The same is true for the creative and exploratory phases of architectural design. The time the architect spends inhabiting possibilities (testing, rotating, reconsidering, living inside the spatial logic of a problem) is where understanding is constructed.
What is lost is not the output. It is the productive friction of working through the problem — the tacit knowledge built by resolving details, the embodied understanding that comes from doing the labor of coordination.
From Tools to Dancing Partners
In my doctoral research on augmented physical objects with embedded technologies that sense, respond, and behave, I noticed that not all responsive objects are equally good at fostering meaningful interaction. Some were merely obedient: malleable, responsive to commands, but offering nothing back. Others were inner-driven: stubborn objects that optimized along their own predetermined dimension, blind to the user’s solicitations. The most interesting were good dancers: autonomous yet responsive, ideal relational partners that share control and engage in dialogic give-and-take, with enough character to be genuinely interesting collaborators. (Nikolovska 2006, 49–51)
Three relational qualities distinguish objects worth thinking with from objects that merely respond. (Nikolovska and Ackermann 2006, 164) An intelligent tool has holding power: the ability to engage and sustain attention long enough for meaningful exploration to take place. It has transformative power: the ability to invite initiative rather than dependence, to let the architect in rather than performing for them. And it has reflecting power: the ability to open a space for contemplation, to help the architect see their own design process from a new angle. These are relational qualities, not features, and they describe the difference between a tool that processes commands and one that supports thinking.
This taxonomy maps onto the current landscape of AI tools for architects with uncomfortable accuracy. Most tools being built for the AEC industry today are obedient: powerful, waiting for instructions, doing exactly as commanded. Better tools, but still tools. Some optimize for energy performance or structural efficiency along a single axis, indifferent to the architect’s broader intentions; these are the inner-driven ones. Very few are being designed as dancing partners.
The Good Enough Parent
There is a concept from developmental psychology that maps onto this problem with uncomfortable precision. Donald Winnicott, the British pediatrician and psychoanalyst, introduced the idea of the good enough mother (now more commonly referred to as the good enough parent) in work that has influenced developmental thinking for over half a century. (Winnicott 1971, 11)
The good enough parent is not a perfect parent. They do not anticipate every need, resolve every discomfort, or prevent every frustration. Instead, they provide what Winnicott called a holding environment: a space of sufficient safety and responsiveness within which the child can begin to encounter difficulty on their own terms. The parent initially adapts almost completely to the infant’s needs. Then, gradually, they introduce small failures. While these failures may be observed as negligence, they are in fact developmental. If the parent were perfect and if the child’s every need were met before it was even fully felt, the child would never develop an independent self. The sequence matters: containment first, then productive frustration within that containment. The child must feel held before the gap between desire and provision becomes a space for growth rather than a source of trauma.
Translating Winnicott’s framework to a building intelligence model, we arrive at the good enough intelligence: a system that provides a holding environment for the design process. It maintains structural integrity: the model will not let you design a building that collapses. It lets the architect know when it is uncertain. It does not fabricate. In a profession where buildings are life-safety infrastructure, candor about uncertainty is not a UX feature; it is a professional requirement. It offers relevant information when asked, asserts itself when warranted, flags problems. But it deliberately does not resolve every design problem it detects. It leaves room for the architect to struggle productively, because that creative struggle is where insight, exploration, and the design of great buildings live.
Consider the difference in practice. A conflict is detected between a beam and a duct route. The AI agent resolves it silently: it reroutes the duct, adjusts the ceiling height, updates the coordination model. The architect never knows the conflict existed. The good enough intelligence detects the same conflict and shows it to the architect, not as an error message, but as a spatial condition with implications. The beam is here because the span requires it. The duct is here because the mechanical room is there. The ceiling height matters because the room below is an auditorium that requires specific proportions for acoustic reasons. The architect sees the problem, understands its causes, and resolves it. In the process, they have an opportunity for a richer understanding of how structure, mechanical systems, and spatial quality interact. The next time a similar condition arises, the architect recognizes it. Not because the system flagged it, but because they lived through it.
Knowledge That Outlives Us All
The good enough intelligence is also about learning forward, not only about holding back.

From Bassett Jones's 1931 Architectural Forum article. Public domain.
Many moons ago, the architect and computational design pioneer Neil Katz described a problem that had nothing to do with software. One of SOM’s engineers was a man who had spent nearly four decades working on elevator shafts for supertall buildings. He was the authority who understood the specific interplay between elevator shaft geometry, structural core design, and tower floor plate efficiency. His knowledge was the kind of understanding that lives in the judgment calls made over thousands of hours of resolving specific problems in specific buildings. He was also approaching retirement. When he retired, Neil lamented, that knowledge would leave with him.

From Bassett Jones's 1931 Architectural Forum article. Public domain.
This is not an unusual story in architecture. Architectural knowledge has always been transferred through mentorship, the apprenticeship model, and the culture of a studio where junior architects absorb the senior architect’s way of seeing by working close enough to understand why and how they think. The knowledge is in the redlines, in the desk crits, in the offhand comment about why the detail fails in that climate.
AI offers something that has never existed before in architectural practice: the possibility of persistent and transferable knowledge without reducing it to rules. Nick Cameron, a principal architect and Director of Digital Practice at Perkins & Will, described the work of Marty, a soon-to-be-retired colleague with a distinctive watercolor rendering style. In a follow-up ideation workshop at Motif we outlined an AI rendering agent we affectionately named “Automate Marty.” This agent was not to be a generic rendering tool. It was to be a vessel for a specific person’s aesthetic sensibility, made available to the rest of the firm beyond the limits of that person’s time and availability.
But this raises a question the enthusiasm tends to suppress: what kind of knowledge representation is being used, and what does that choice cost?
Decades ago, George Stiny and William Mitchell analyzed Palladio’s villas and encoded the spatial relations, proportions, and compositional principles governing his designs as an explicit formal grammar. The grammar produced something no image corpus can: a representation with logic that is readable, debatable, and generative. It does not describe what Palladian buildings look like. It encodes how they are made, and makes that making inspectable. (Stiny and Mitchell 1978)
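What “inspectable and generative” means can be sketched in code. The following is a toy illustration in the spirit of a formal grammar, not the actual Palladian grammar: every name and rule here is hypothetical. The point is structural — each derivation step records which rule fired and why, so the logic remains readable and debatable after the fact.

```python
# Toy sketch of a rule-based derivation whose history stays inspectable.
# Rules and rationales are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Derivation:
    state: str                                   # current design description
    history: list = field(default_factory=list)  # (rule_name, rationale) per step

    def apply(self, rule_name, transform, rationale):
        # Apply one transformation and record why it was taken.
        self.state = transform(self.state)
        self.history.append((rule_name, rationale))
        return self

d = Derivation(state="grid(3x3)")
d.apply("add_portico", lambda s: s + " + portico",
        rationale="entry axis faces the approach road")
d.apply("mirror_plan", lambda s: s + " | mirrored",
        rationale="bilateral symmetry about the central hall")

# Unlike an image corpus, the derivation answers "why" at every step.
for rule, why in d.history:
    print(rule, "->", why)
```

The contrast with a trained image model is exactly the history list: the grammar’s output carries its own reasoning, term by term, where the model’s output carries only appearance.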
An AI agent trained on images of Palladian villas can reproduce their visual character, maybe. It cannot distinguish the spatial decisions that respond to the villa’s site from those that respond to its programmatic function. It cannot extend the logic deliberately into a new context while remaining accountable to the original. The aesthetic sensibility is captured as an opaque artifact rather than an intelligible language. The same is true of any rendering agent trained on Marty’s watercolors.
Yanni Loukissas, in All Data Are Local, provides the framework for understanding why this matters. Loukissas argues that data cannot be abstracted from the settings in which they were produced (Loukissas 2019, 33). Read against Marty: the institutions, instruments, and communities behind those watercolors were never captured. What buildings were these renderings made for? What client cultures, design phases, or kinds of spatial decisions were they meant to facilitate? What happens when the agent is applied to a project type for which the original watercolors were never intended? The AI agent cannot answer these questions, because the answers are not in the image data. They are in the context that produced the images, and that context was never captured. The HdM archive preserves exactly this context; a training corpus does not.
The Formation Problem
Architects who hand-drafted once worried that CAD would destroy how architects design. The prediction never held, and the Luddite label has hardened into reflex: any critique of new tools now reads as the latest iteration of an old complaint.
The claim that AI tools remove specific skills or jobs is inflammatory (witness the daily LinkedIn posts announcing the end of design) and misguided, because it reduces design work to production alone. Skills and jobs are rebuilt and reinvented, and sometimes new roles emerge (BIM manager or, most recently, prompt engineer). The claim that holds is harder and narrower: AI tools can remove the developmental process through which architects learn and evolve their thinking. CAD did not threaten that either.
Generative tools that front-run the architect’s own reframing of the problem do. The argument holds even as the tools’ ceiling rises. It does not depend on AI being less capable than the architect. It depends on the architect’s capability being something that had to be formed through encounter with difficulty, and on that formation being precisely what the tools now absorb. Ackermann is direct about this: learning is something that happens to the learner. A tool that removes the struggle removes the learning, whether or not it produces a good output.
The quiet displacement is not of the architect’s skills. It is of the conditions under which architects become architects.
The Paradox
The good enough intelligence can fail in two directions.
The first failure is the intelligence that does too much. When the system absorbs the labor of working through a problem, it absorbs the learning that the labor contained. The architect becomes a curator of machine output without the deep understanding that makes curation meaningful. The junior architect is the most vulnerable: if they never produce detailed architectural drawings because the system generates them, they may never develop an intuitive understanding of how buildings come together. Over time, the firm produces architects who can steer a system but cannot think without one. This is the automation complacency pattern documented by Parasuraman and Manzey across aviation, medicine, and human-robot interaction: the gradual erosion of the situational awareness and skill depth that makes human judgment meaningful when the system fails or produces an unexpected result. (Parasuraman and Manzey 2010, 381)
The stakes of this failure are not only epistemic. Architecture is among the most heavily legislated of the design professions. An architect stamps drawings with their professional license: a legal assertion of responsibility that no AI system can share. When an intelligent system silently resolves a coordination conflict or adjusts a clearance to meet code, it is making a decision the architect will be asked to defend. Can an architect who did not understand a decision meaningfully defend it? Maybe yes, maybe not. The good enough intelligence is a professional position as much as a pedagogical one.
The second failure is less obvious and more insidious. It is the intelligence that remembers too confidently. When a system accumulates knowledge from past projects and applies it to new ones, it risks calcifying institutional habits into algorithmic defaults. The way the firm has always done it becomes the way the system always proposes it. Not because it is the best approach, but because it is the most represented in the training data. An intelligent system that cannot surface these assumptions, paired with an architect who cannot contest them, does not add up to a knowledge system. It is a calcification engine.
What has been described here is a paradox at the heart of any sincere attempt to build intelligence into design tools. Greater capability demands greater restraint. Fuller memory demands greater humility. These are not limitations of the system; they are its design.
The good enough intelligence does not merely hold back. It holds back in specific places, for specific reasons, calibrated to an architect, a problem, and a specific moment in the design process. In the early stages, when the design is loose and the architect is still constructing an understanding of the problem, the AI offers less: more questions than answers, more possibilities than proposals. As the design matures and the architect’s intent becomes clearer, the AI can offer more, because the architect now has the understanding to evaluate what the system proposes.
Simultaneously, the AI accumulates. It learns what the architect values, how the firm approaches problems, what patterns recur across projects. But it holds this knowledge in a form whose provenance remains visible, not as opaque statistical tendencies but as inspectable, contestable understanding that the architect can interrogate and override.
The measure of the tool is not only what it produces. It is also how it empowers the architect who uses it, and what kind of practice it makes possible across generations. Calibrated restraint, legible knowledge, and contestable provenance are not three separate design requirements. They are one requirement seen from three angles: what it takes to keep architectural practice constructive across the generations that will use the tool. A model worth building helps the architect think more clearly, preserves the productive struggle of design while eliminating the unproductive friction of coordination, accumulates institutional knowledge without calcifying institutional habit, and keeps the provenance of its inheritance visible and contestable. Such a model is not a tool. It is a dancing partner — in Ackermann’s sense, a thing to think with; in Winnicott’s sense, good enough. The shelves of the HdM archive are full of such things. The question for the next generation of intelligent design tools is whether they can be too.
References
Ackermann, Edith. Piaget’s Constructivism, Papert’s Constructionism: What’s the difference? Geneva: Research Center in Education, 2001.
Knight, Terry W. Applications in Architectural Design, and Education and Practice. Report for the NSF/MIT Workshop on Shape Computation. MIT, 1999.
Loukissas, Yanni. All Data Are Local: Thinking Critically in a Data-Driven Society. MIT Press, 2019.
Nikolovska, Lira. Physical Dialogues with Augmented Furniture. PhD dissertation, MIT School of Architecture and Planning, 2006.
Nikolovska, Lira, and Edith Ackermann. Exploratory Design, Augmented Furniture: On the Importance of Objects’ Presence. Kluwer Academic Publishers, 2006.
Parasuraman, Raja, and Dietrich H. Manzey. Complacency and Bias in Human Use of Automation: An Attentional Integration. Human Factors 52, no. 3 (2010): 381–410.
Stiny, George, and William J. Mitchell. The Palladian Grammar. Environment and Planning B 5 (1978): 5–18.
Winnicott, Donald W. Playing and Reality. Tavistock Publications, 1971.

