The Coasean Singularity Delusion: Why AI Won't Dissolve Firms Into Frictionless Markets
An NBER paper suggesting AI agents would render transaction costs negligible – thus cancelling the logic that justifies firms – caught our attention. In short, it's theology dressed in mathematics.
A new NBER working paper by Shahidi, Rusak, Manning, Fradkin, and Horton presents a vision of AI agents transforming digital markets by “dramatically reducing transaction costs.” Reading through their analysis, one encounters a familiar pattern: economists skimming the surface of a technology, extrapolating efficiency gains to infinity, and arriving at conclusions that would make a physicist weep.
Their central thesis rests on Coase’s insight that transaction costs explain firm boundaries. Since AI agents can search, negotiate, and contract at near-zero marginal cost, they suggest we’re heading toward a fundamental restructuring of market organization—what we’ll call their implicit “Coasean singularity,” where transaction costs drop so low that the logic justifying firms becomes obsolete.
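Their logic is easy to state. In a stylized make-or-buy condition (our notation, not the paper's), a firm keeps an activity in-house only while organizing it internally costs less than buying it on the market once transaction costs are counted:

```latex
% Stylized Coasean make-or-buy condition (our notation, not the paper's).
%   p_mkt : market price of the activity
%   tau   : transaction costs: search, negotiation, contracting, monitoring
%   c_in  : internal production cost
%   gamma : internal coordination and bureaucracy costs
\[
\text{make in-house} \iff c_{\text{in}} + \gamma \;<\; p_{\text{mkt}} + \tau
\]
% The paper's implicit move: AI agents drive tau toward 0, tipping the
% inequality toward "buy" everywhere, so firm boundaries dissolve.
\[
\tau \to 0 \;\Rightarrow\; \text{market exchange wins wherever } p_{\text{mkt}} < c_{\text{in}} + \gamma
\]
```

Stated this baldly, the fragility shows: the conclusion holds only if τ is the sole reason firms exist, and only if τ really does approach zero. The rest of this piece disputes both premises.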
We’ve been here before.
The E-Commerce Déjà Vu
In the early 2000s, similar prophets proclaimed e-commerce would eliminate “friction” and deliver us to a “frictionless economy.” Management consultants at prestigious firms assured their banking clients that physical branches were dinosaurs destined for extinction. The logic was impeccable: why maintain expensive real estate when customers could transact online at negligible cost?
Fast forward to 2025. Friction hasn’t disappeared—it’s migrated to different parts of the value chain. Last-mile delivery logistics. Returns management. Customer service for complex products. And crucially, those physical bank branches? Still here. Still necessary. Because certain transactions requiring human perception, expertise, relationship-building, and trust cannot be replaced by clicking buttons on screens.
The NBER authors make precisely the same category error, just with shinier technology. They acknowledge some new frictions (congestion, price obfuscation) but treat these as manageable externalities rather than fundamental features of how complex systems actually work.
What Coase Actually Taught Us (And What Economists Forgot)
Here’s what’s maddening about invoking Coase to predict firm dissolution: Coase himself understood that transaction costs were just one reason firms exist. The real picture is far richer.
Firms exist because:
Knowledge integration requires sustained collaboration that markets cannot coordinate
Complementary assets need protection from holdup problems
Dynamic capabilities emerge from organizational learning, not spot transactions
Relational contracts depend on repeated interaction and shared context
Power and coordination matter in ways that cannot be reduced to “friction”
Coase is not the only economist worth consulting here. Elinor Ostrom studied how communities govern common-pool resources, overturning the supposed inevitability of the proverbial “tragedy of the commons”. Her research demonstrated that successful collective action requires institutions, rules, boundaries, and trust that emerge from sustained relationships – precisely what firms provide.
If we consider the whole economy as a commons – a giant pool of resources managed collectively by those active within it – then the mechanisms that emerge from that complex system, firms and other collectives included, are not inefficiencies to be optimized away but institutional solutions whose form follows their function. You cannot replace this with efficient API calls between AI agents, no matter how sophisticated your natural language processing.
The paper treats trust as if it’s merely an information problem. Use AI to verify identity! Deploy cryptographic proofs! But trust isn’t about perfect information—it’s about accountability, governance, and the capacity to sanction bad actors. AI agents introduce new principal-agent problems. Who controls the agent? Whose interests does it serve? How do we audit its decisions? These questions require organizational structures, not better algorithms.
The Material Reality Economists Ignore
But the deepest flaw is more elementary. There is a breathtaking passage where the authors write: “The activities that comprise transaction costs—learning prices, negotiating terms, writing contracts, and monitoring compliance—are precisely the types of tasks that AI agents can potentially perform at very low marginal cost.”
Very low marginal cost.
Let us be precise about what “low marginal cost” AI actually requires:
Data centers consuming electricity equivalent to small nations. Ireland’s data centers already account for 21% of national electricity demand. By 2030, AI infrastructure could consume 10% of total U.S. electricity.
Water for cooling—millions of gallons per data center annually. In water-stressed regions, this directly competes with human needs and agriculture.
Rare earth minerals for chips and hardware, extracted through environmentally devastating mining processes, often in geopolitically fragile regions with minimal labor protections.
Electronic waste from hardware obsolescence. The current AI boom is accelerating replacement cycles, generating mountains of toxic e-waste.
Embodied energy in manufacturing semiconductors—some of the most energy-intensive industrial processes humanity has invented.
One needs to be an economist with no understanding of matter, energy, thermodynamics, or planetary boundaries to imagine we could reach a “singularity” beyond which Coase’s insights become obsolete. We live in a finite world governed by physical laws. Computation requires energy. Energy transformation generates entropy. Resources deplete. These constraints don’t disappear because Silicon Valley built a better autocomplete.
The authors mention “compute costs” exactly once, treating it as a pricing variable rather than a biophysical constraint. This is economics in a bottle—models that work beautifully on whiteboards but collapse when they encounter reality.
The Real Transformation (Which Economists Won’t Like)
Here’s what will actually happen as AI capabilities expand:
Friction migrates, doesn’t disappear. Just as e-commerce moved friction from browsing to logistics, AI will shift costs from some transaction activities to others—verification, governance, dispute resolution, dealing with AI-generated spam and gaming.
Trust problems multiply. Every AI agent introduces new information asymmetries and alignment challenges. Who audits the agents? How do we prevent manipulation? These require human judgment and institutional oversight—exactly what firms provide.
Rebound effects dominate. Lower costs don’t necessarily mean lower total impact. Cheaper AI negotiations might flood markets with applications, creating congestion that requires new filtering mechanisms (which impose their own costs and biases). The paper acknowledges this for job applications but treats it as a curiosity rather than a fundamental dynamic; the arithmetic after this list shows how quickly rebound can swamp efficiency.
Physical constraints bind harder. As AI use scales, energy and material limits will impose increasingly severe costs. These won’t show up in your transaction cost calculations until they manifest as price spikes, rationing, or regulatory intervention.
New organizational forms emerge—but they’re still firms. Companies will restructure around AI capabilities, but they’ll remain organizations with boundaries, governance structures, and collective capabilities. The “make or buy” boundary shifts; it doesn’t evaporate.
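The rebound point deserves arithmetic. Here is a two-line illustration, with every number a placeholder chosen for round arithmetic rather than an estimate:

```latex
% Illustrative rebound arithmetic; all figures are placeholders.
%   E = total energy use,  n = transaction volume,  e = energy per transaction
\[
\frac{E_{\text{after}}}{E_{\text{before}}} = \frac{n_{1}}{n_{0}} \cdot \frac{e_{1}}{e_{0}}
\]
% Suppose AI makes each transaction 5x more efficient (e_1/e_0 = 1/5), but
% near-zero cost multiplies volume 30x (n_1/n_0 = 30), as with the
% job-application flood the paper itself documents. Then:
\[
\frac{E_{\text{after}}}{E_{\text{before}}} = 30 \times \frac{1}{5} = 6
\]
% A fivefold efficiency gain delivers a sixfold increase in total impact.
```

This is Jevons’ paradox in miniature: the efficiency that lowers the price of a transaction is precisely what multiplies the number of transactions.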
Economics Without Physics Is Just Politics and Storytelling
The broader problem this paper exemplifies is mainstream economics’ persistent denial of biophysical reality. When your models treat computation as costless information processing divorced from its material substrate, you can generate any conclusion you want. It’s the same intellectual move that gives us infinite growth models on a finite planet, externalization of environmental damage as “market imperfections,” and the persistent fantasy that technology will deliver us from resource constraints through pure cleverness.
This isn’t science. It’s theology dressed in mathematics.
Real science acknowledges constraints. Thermodynamics. Materials science. Ecology. Systems dynamics. These disciplines understand that you cannot optimize around physical limits—you must design within them. Circular value networks. Life-cycle assessments. Planetary boundaries frameworks. These are the tools we need.
Instead, we get papers celebrating AI’s potential to reduce transaction costs while ignoring that the infrastructure enabling those transactions is accelerating us toward ecological catastrophe.
What We Should Be Asking Instead
If we took biophysical constraints seriously, here are the questions economists should investigate:
What is the energy and material throughput per AI-mediated transaction compared to human-mediated alternatives? (A back-of-envelope sketch below shows what that accounting involves.)
At what scale do energy demands from AI infrastructure create systemic risks to grid stability and climate goals?
How do we design economic institutions that internalize the full life-cycle costs of AI systems?
What governance structures ensure AI deployment serves genuine human needs rather than generating artificial demand?
How do we prevent AI capabilities from being captured by concentrated economic power?
These questions don’t appear in the NBER paper because they would undermine its central narrative. They require acknowledging that technology choices have consequences, that efficiency isn’t the only value that matters, and that markets embedded in physical reality face constraints that no amount of computational power can dissolve.
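Take the first of those questions. Even a crude answer requires an accounting structure the paper never attempts. Here is a toy sketch; every constant is an illustrative placeholder to be replaced with measured values, not a figure we are defending:

```python
# Toy back-of-envelope: energy throughput of one AI-mediated transaction.
# EVERY constant below is an illustrative placeholder, not a measurement;
# the point is the accounting structure the paper's framework omits.

WH_PER_MODEL_CALL = 0.5   # placeholder: Wh per LLM inference call
CALLS_PER_DEAL = 40       # placeholder: agent rounds to close one deal
PUE = 1.3                 # placeholder: data-center overhead (cooling etc.)
EMBODIED_SHARE = 0.25     # placeholder: amortized chip-manufacturing energy


def wh_per_ai_transaction() -> float:
    """Direct inference energy, grossed up for overhead and embodied energy."""
    direct = WH_PER_MODEL_CALL * CALLS_PER_DEAL * PUE
    return direct * (1 + EMBODIED_SHARE)


def annual_twh(transactions_per_year: float) -> float:
    """Scale the per-transaction figure to an annual total in TWh."""
    return wh_per_ai_transaction() * transactions_per_year / 1e12


if __name__ == "__main__":
    print(f"{wh_per_ai_transaction():.1f} Wh per AI-mediated transaction")
    # If agents mediate a trillion deals a year (placeholder volume):
    print(f"{annual_twh(1e12):.1f} TWh/year at 10^12 transactions")
```

Whatever numbers one plugs in, the structure is the lesson: inference energy, overhead, embodied manufacturing, and volume each appear as explicit terms, where the paper’s framework sees only a price.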
Conclusion: Singularities and Other Fairy Tales
The concept of a “singularity” – whether Coasean or technological – is revealing. It describes a point where models break down because they’ve been extrapolated beyond their domain of validity. Physicists use it to mark the boundaries of their theories, admitting ignorance. Economists and technologists use it to mark the arrival of transcendence, claiming knowledge of transformation so profound that existing rules no longer apply. Perhaps that is because ignoring those rules is a prerequisite for kicking the can down the road.
The difference is instructive.
This paper offers sophisticated analysis of how AI might reshape specific market mechanisms. Some of those insights are valuable. But wrapped around that analysis is a story about frictionless efficiency and dissolved organizational boundaries that ignores material reality, misunderstands what firms actually do, and extrapolates marginal cost curves into fantasy futures.
We don’t need more papers celebrating AI’s potential to optimize markets. We need economics that acknowledges we’re running planetary systems on finite energy and materials, that efficiency gains often generate rebound effects that swamp direct benefits, and that the most important question isn’t “how can AI reduce transaction costs” but “what kind of economy can survive within biophysical limits while providing decent lives for billions of people?”
That question requires different tools than the ones deployed here. It requires systems thinking, ecological economics, commons governance principles, and serious engagement with the material basis of all economic activity. It requires, in short, abandoning the fantasy that clever algorithms can dissolve the constraints that actually matter.
The Coasean singularity isn’t coming. But the planetary boundaries crisis is already here. Perhaps economists might consider which problem deserves their attention.

