April 3, 2026
by Lira Nikolovska
Industry Insights

Building Intelligence, Not Building Information

One word changes in the name and everything downstream shifts.

When Building Information Modeling becomes Building Intelligence Modeling, the difference is not branding. It is a design manifesto, and, less comfortably, a political act. The renaming declares that the central object of architectural practice is no longer a structured repository of geometry and specifications, queried by humans, populated by humans, interpreted by humans, but something that reasons about what it contains. A system that anticipates, suggests, warns, generates, collaborates.

If that sounds like a modest improvement – artificial intelligence as a smarter assistant, a better autocomplete – the argument of this essay is that the opposite is true. The moment the model reasons, the nature of the model changes. The nature of the work changes. And the nature of the architect’s role changes with it. But this shift is not only a gain. It is also a loss, and any honest account of the transition must reckon with both.

Function Follows Intelligence

William J. Mitchell, the late dean of MIT’s School of Architecture and Planning, and one of the very few thinkers who held architecture and computation in both hands as professional practice, not as metaphor, saw this coming decades before the technology.

Sullivan gave us form follows function. Mitchell inverted it: function follows code (Mitchell 1999, 49-50). If the software determines what a building can do, then the building’s capabilities are shaped not by the architect’s hand alone but by the logic embedded in the system. Extend this one step further and you arrive at the present condition: function follows intelligence. When AI writes and modifies code in real time based on design intent, structural efficiency, energy performance, daylighting, and egress stop being static specifications locked at the end of design development. They become living parameters, responsive to every change in the model.

Mitchell’s deeper contribution is not about performance optimization. It is about organization. He argued that the fundamental shift in how we inhabit the world was a move from boundaries to connections as the organizing principle. Traditional BIM is organized around boundaries: walls, floors, property lines, discipline separations, file boundaries between consultants. The architectural model lives in one file, the structural model in another, the MEP model in a third. The boundaries are not incidental: they are the architecture of the tool itself.

An intelligent model should be organized around connections – relationships between spaces, flows of people and air and light, interdependencies between systems, links between design decisions and their downstream consequences. This is not an interface argument. It is a data model argument. And it has profound implications for how an “AI-first BIM” must be architected from the ground up, because it is far harder to bolt connection-logic onto a system whose bones are made of boundaries.
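The contrast can be made concrete with a minimal sketch. The class and relationship names below (Space, Connection, BuildingGraph, the "daylight" and "airflow" kinds) are illustrative assumptions, not an existing BIM schema: the point is only that when relationships are first-class records, downstream consequences become a query rather than a coordination meeting.

```python
# Hypothetical sketch of a connection-first building model.
# All names are illustrative assumptions, not a real BIM API.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Space:
    name: str


@dataclass(frozen=True)
class Connection:
    kind: str  # e.g. "airflow", "daylight", "circulation"
    a: str     # name of one connected space
    b: str     # name of the other connected space


@dataclass
class BuildingGraph:
    spaces: dict = field(default_factory=dict)
    connections: list = field(default_factory=list)

    def add_space(self, name):
        self.spaces[name] = Space(name)

    def connect(self, kind, a, b):
        self.connections.append(Connection(kind, a, b))

    def affected_by(self, space_name):
        """Every space linked to `space_name` by any relationship --
        the downstream consequences a boundary-first model hides
        behind file and discipline separations."""
        out = set()
        for c in self.connections:
            if c.a == space_name:
                out.add(c.b)
            elif c.b == space_name:
                out.add(c.a)
        return out


g = BuildingGraph()
for s in ("atrium", "office", "plant_room"):
    g.add_space(s)
g.connect("daylight", "atrium", "office")
g.connect("airflow", "plant_room", "office")
print(sorted(g.affected_by("office")))  # ['atrium', 'plant_room']
```

In a boundary-first system the same question – what does changing this office touch? – spans three files and two consultants; here it is one traversal.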

The paradigm shift this represents is of the same order of magnitude as the move from command-line drafting to direct manipulation. Except this time, the system is not waiting for instructions. It reasons toward the architect’s intent and, critically, makes that reasoning visible. The architect does not evaluate a finished proposal. The architect engages with the system’s logic, tests assumptions against spatial intuition, adjusts the intent in response to what the intelligence reveals about the problem’s structure. Understanding is constructed through the exchange, not delivered before it. The temptation will be to bolt a conversational layer onto an existing 3D modeling environment and call it AI-first. That fails the test. Intelligence woven into the surface of an old paradigm is still an old paradigm.

The Ontology Changes

Here is the part that matters most.

An intelligent system needs richer semantic structures than an information system. A traditional BIM model knows that a wall is a wall – it stores geometry, a material specification, a fire rating. An intelligent model needs to understand not just what a wall is but why it is there, what it is for, and how it relates to human experience, regulatory frameworks, material performance, and design intention – simultaneously. The wall is not a geometric plane. It is a mediator between thermal zones, acoustic environments, visual fields, and social spaces. The moment the model is asked to reason, the poverty of its existing ontology becomes visible.
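The ontological gap described above can be sketched in a few lines. The field names and role vocabulary here are hypothetical, chosen only to show the shape of the difference: a traditional record stops at geometry, material, and rating, while an intelligent model must also carry intent and the relationships the wall mediates.

```python
# Hypothetical sketch: a wall as a set of mediating roles rather than
# a geometric record. All names are illustrative, not a real BIM schema.
from dataclasses import dataclass, field


@dataclass
class Wall:
    # what a traditional model stores
    geometry: tuple        # e.g. ((0, 0), (6, 0)) in plan
    material: str
    fire_rating_min: int
    # what an intelligent model additionally needs
    intent: str                                   # why the wall is there
    mediates: dict = field(default_factory=dict)  # role -> (zone_a, zone_b)

    def roles(self):
        """Every relationship this wall negotiates -- thermal, acoustic,
        visual, social -- not just its shape and specification."""
        return set(self.mediates)


w = Wall(
    geometry=((0, 0), (6, 0)),
    material="CLT",
    fire_rating_min=60,
    intent="separate quiet study from circulation",
    mediates={
        "thermal": ("study", "corridor"),
        "acoustic": ("study", "corridor"),
        "visual": ("study", "corridor"),
    },
)
print(sorted(w.roles()))  # ['acoustic', 'thermal', 'visual']
```

A system that can only see the first three fields can check a fire rating; only one that carries the last two can reason about whether the wall should exist at all.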

Bruno Latour, the French philosopher who spent his career dismantling the neat separations between nature and culture, human and nonhuman, would recognize this problem immediately. A building model that reasons is what he would call a hybrid object – not purely technical, not purely social, but both at once. Traditional BIM pretends the model is a neutral container of facts. Latour would say: nothing is neutral. The model is an actor. It shapes outcomes. It constrains and enables. Denying this does not make the tool objective. It makes the politics of the tool invisible.

This is where the argument must get uncomfortable, because Latour’s challenge does not stop at legacy BIM. It applies with equal force to the intelligent model being proposed here. An AI-first BIM is not a neutral amplifier of the architect’s intent. It is trained on data that encodes assumptions about what buildings are for, whose comfort matters, which performance metrics are optimized and which are ignored. The intelligence in the model is not a mirror. It is an argument – and like all arguments, it has a position, whether it declares one or not.

What Latour called the parliament of things (Latour 1993, 142) – a forum where nonhuman actors are given representation in collective decision-making – is almost a literal description of what an intelligent building model should do. The AI gives voice to actors that cannot speak in a traditional design review: the future occupant’s thermal comfort, the structural member under stress, the carbon footprint of a material choice, the maintenance worker who will need access in twenty years.

But a parliament requires scrutiny of its own rules. Yanni Loukissas, in All Data Are Local, provides the sharpest methodology for that scrutiny. His central argument is that all data are indexes to local knowledge, inseparable from the specific settings in which they were produced (Loukissas 2019, 33). Every corpus has a provenance, and that provenance is not incidental to what the system knows; it is what the system knows. An AI-first BIM trained predominantly on large-scale, North American and European commercial construction carries that corpus into every recommendation it makes about structural bays, material selection, and thermal performance. The question of what the intelligence optimizes for – whose definition of performance, whose model of comfort, whose building culture – is a data politics question before it is a design question. The honest version of the AI-first argument does not pretend this problem away. It insists that the provenance of the model be made legible: visible to the architects who use it, contestable by the communities affected by it. This is not a caveat. It is a design requirement.

What the Paradigm Costs

Mario Carpo, in The Second Digital Turn, draws a distinction between two computational paradigms in architecture. The first digital turn – parametric design, scripting, smooth surfaces – amplified the architect’s authorial control. The second, driven by machine learning, does something different: it produces formal and spatial outcomes that no human authored and no human can fully explain. The architect evaluates results whose logic is opaque.

This is not the same as saying the architect evaluates results whose quality is opaque. An architect can recognize a good plan without understanding the algorithmic path that produced it, just as a musician can recognize a good melody without understanding the mathematics of consonance. But the inability to trace the reasoning – to understand why the system proposed what it proposed – represents a genuine loss of a particular kind of knowledge. The feedback loop between intention, execution, and understanding that has defined design practice for centuries is partially broken.

There is something else lost, harder to name. The productive friction of working by hand – the resistance of the material, the slowness that forces reflection, the tacit knowledge built through ten thousand hours of manually resolving details. When an intelligent system absorbs the coordination labor that currently consumes so much of architectural practice, it also absorbs some of the embodied learning that comes from doing that labor. The junior architect who never routes a duct because the system does it may be freed to think at a higher level of abstraction. Or may be deprived of learning something essential about how buildings work. Likely both, in proportions that will take a generation to understand.

What the Architect Becomes

Phil Bernstein – who spent two decades at Autodesk shaping BIM strategy before returning to Yale to teach – documents the seismic shifts already underway in the architect’s role and responsibilities (Bernstein 2018, 39, 91). The architect’s role changes from author to curator.

Whether this is an elevation or a diminishment depends entirely on what curation means in practice. If it means the architect exercises the same depth of spatial, material, and experiential judgment but is freed from the mechanical labor of representation, the role is genuinely amplified. If it means the architect becomes a reviewer of options generated by a system whose reasoning is opaque and whose biases are invisible, the role is hollowed out – prestigious in title, diminished in substance.

The difference between these two outcomes is not determined by the technology. It is determined by the design of the tool. An AI-first BIM that treats the architect as a sophisticated collaborator – surfacing its reasoning, making its assumptions contestable, supporting the progressive development of design intent from vague to precise – produces one kind of practitioner. A system that presents finished options for approval produces another.

Antoine Picon has argued that every new technical capability in architecture does not merely change what architects do – it changes what architects are (Picon 2010, 104–106). The drawing did not just make design more efficient; it produced a new kind of design subject, one who thinks through projection, abstraction, orthographic convention. CAD produced another. BIM another still. The question is not whether an intelligent model will reshape the architect’s subjectivity (it will!) but what kind of subject it produces, and whether that subject retains the spatial, material, and experiential sensibilities that make architecture a discipline rather than an optimization problem.

Where This Leads

What has been described here is not an incremental improvement to a familiar tool. It is something categorically different: not a 3D modeler with AI features attached, not a chatbot that generates building geometry, but an intelligent design environment – a space where human spatial intelligence and machine analytical intelligence collaborate in real time, each shaping the other in ways that are not fully predictable in advance.

The representation evolves along the way. In the earliest design phases, the model is loose, diagrammatic, more intention than geometry – and the intelligence should be comfortable reasoning at that level of abstraction. As the design matures, the model gains precision, and the AI’s contributions become more specific, more technical, more constrained by codes and materials and budgets. This progressive formalization – from sketch to scheme to specification – should be a first-class concept in the tool’s architecture. Not something the architect manages by switching between different applications.
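One way to make progressive formalization first-class is to let the stage of the design govern how strictly the intelligence evaluates it. The stage names and per-stage tolerances below are illustrative assumptions, a sketch of the idea rather than a proposal for actual values.

```python
# Hypothetical sketch: progressive formalization as a first-class concept.
# Stage names and tolerance values are illustrative assumptions.
from enum import Enum


class Stage(Enum):
    SKETCH = 1         # intention, loose geometry
    SCHEME = 2         # rooms, adjacencies, rough dimensions
    SPECIFICATION = 3  # codes, materials, budgets


# how much dimensional slack the intelligence tolerates at each stage (metres)
TOLERANCE = {Stage.SKETCH: 1.0, Stage.SCHEME: 0.1, Stage.SPECIFICATION: 0.005}


def acceptable(stage, proposed, required):
    """Whether a proposed dimension satisfies a requirement at this stage
    of formalization -- strictness grows as the design matures."""
    return abs(proposed - required) <= TOLERANCE[stage]


print(acceptable(Stage.SKETCH, 3.4, 3.0))         # True
print(acceptable(Stage.SPECIFICATION, 3.4, 3.0))  # False
```

The same corridor width that is a reasonable gesture in a sketch becomes a code violation in a specification; a tool that knows which stage it is in can reason appropriately at both, without the architect switching applications.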

Picon’s analysis of digital design procedures points toward this condition: digital tools have fundamentally altered the architect’s relationship to time, collapsing the distance between conception and production, making design a continuous process rather than a sequence of discrete deliverables. An intelligent model intensifies this further. If the model reasons in real time against living performance parameters, the building does not become a fixed object when construction documents are issued. The intelligent model can continue to reason during construction, during commissioning, during occupation: adjusting, learning, updating its understanding of the building it describes. The question of when design ends becomes genuinely unclear. And with it, the question of who the author is, and what authorship means for an artifact that never stops being authored.

The renaming was the first design decision. Building Intelligence, not Building Information. But a name is a claim, not a guarantee. Everything that follows – the data model, the interaction patterns, the relationship between human judgment and machine reasoning, the politics of whose priorities the intelligence serves, the kind of architect the tool produces – will determine whether the claim is earned.

References

Bernstein, Phil. Architecture | Design | Data: Practice Competency in the Era of Computation. Birkhäuser, 2018.

Carpo, Mario. The Second Digital Turn: Design Beyond Intelligence. MIT Press, 2017.

Latour, Bruno. We Have Never Been Modern. Translated by Catherine Porter. Harvard University Press, 1993.

Loukissas, Yanni. All Data Are Local: Thinking Critically in a Data-Driven Society. MIT Press, 2019.

Mitchell, William J. City of Bits: Space, Place, and the Infobahn. MIT Press, 1995.

Mitchell, William J. e-topia: Urban Life, Jim – But Not as We Know It. MIT Press, 1999.

Picon, Antoine. Digital Culture in Architecture: An Introduction for the Design Professions. Birkhäuser, 2010.

Picon, Antoine. Ornament: The Politics of Architecture and Subjectivity. Wiley, 2013.

Lira Nikolovska

Lira Nikolovska is a trained architect and founding designer at Motif Systems. She holds a PhD in Design Computation from the MIT School of Architecture. Her work is at the intersection of spatial intelligence and AI-first design tools for architects.