Semblance of performance
In AI-assisted work, the volume and apparent quality of output are often only a thin layer of varnish. The experienced eye identifies it as the semblance, not the substance, of performance.
There is a version of AI-assisted work that looks, from the outside, like a substantial upgrade in capability. Output quality is up. Turnaround is faster. The worker engages with more material than before, and may in fact feel more capable of discharging their duties.
Look more carefully and something else is happening. The worker is buried deeper in material they didn’t choose, moving at a pace they didn’t set, towards unknown goals, dependent on tools they cannot always fully audit, producing work whose foundations they only partially understand. Their comprehension is no longer theirs and their mind needs the crutches of the machine just to keep up with the volume of output. They are more intensely occupied and less in control, at the same time. The tool has extended their reach while narrowing their agency.
This is not an edge case. It is the dominant pattern of AI adoption in knowledge work, and conventional productivity metrics cannot see it, because what’s being degraded doesn’t show up in output until well after the damage is done.
The framework that makes this visible comes from an unlikely source.
Manfred Max-Neef, the Chilean economist, argued that fundamental human needs – Understanding, Freedom, Participation, among others – are finite and universal: they hold for all humans across cultures and geographic boundaries, and there is a level of sufficiency beyond which they are satisfied. What varies is the satisfiers: the means and arrangements we use to meet those needs.
Some satisfiers are synergistic: meeting one need enriches the capacity to meet others. When we fulfil the need for Subsistence – say, via the satisfiers of food, water, and shelter – we are in a better position to tackle the needs for Participation or Understanding. This helps explain the inability of entire social groups in some countries to engage with complex topics of collective interest. In the USA, for example, workers who need two or even three jobs just to pay the bills, and who hardly have time to properly relate to their children and families, will not easily engage in debates over social media about this or that topic. Nor will they easily find the time to vote.
Other satisfiers are destructive: they appear to meet a need while undermining the conditions for meeting it in future. Heavy SUVs appear to satisfy Freedom and Participation in social life while degrading the conditions for future Subsistence. Commercial television appears to satisfy Leisure while simultaneously degrading Understanding, Identity, and Creation – incidentally, the same destructive pattern that has unfolded over the past 25 years with the commercial, walled-garden Internet.
What AI-intensive work, in its dominant configuration, produces is something the framework implies but Max-Neef never named: a synergistic dissatisfier. An arrangement that degrades the individual's ability to fulfil multiple fundamental needs simultaneously, where the degradations compound rather than merely accumulate. In large part this stems from the manufactured disability of comprehension, a structurally produced addiction to the speed and ease the tools afford, and the narrowing of agency that follows from the first two.
The cascade runs as follows. Outsourcing comprehension – routing AI output around the human's understanding rather than through it – erodes Understanding directly. The acceleration this enables compresses the time available for the free enquiry, reflection, and discernment that Understanding requires. The dependency that follows – needing the tool to maintain the pace the tool created – reduces Freedom, the capacity to act according to one's own judgment. The inability to steer or shape the conditions of one's work hollows out Participation. Less comprehension produces more dependency, which produces less freedom to resist the pace, which leaves less room for comprehension. These needs don't fail sequentially, as if addressing one would restore the others. They are simultaneous and non-substitutable, which means degradation in one cannot be compensated by recovery in another.
The bifurcation that actually matters is not behavioral.
It is not about how actively or passively a worker engages with the tool – the individual-discipline framing that most commentary defaults to. It is architectural: does the design route comprehension through the human or around them?
The same tool can do either. AI used to generate a first draft that the writer must genuinely interrogate, extend, and revise – producing understanding they didn’t have before – is amplifying comprehension. AI used to generate a final draft that ships with minimal engagement is outsourcing it. The output may be indistinguishable. The effect on the need matrix is not.
The constructive orientation is not “use it less.”
That is a response to the wrong diagnosis. The question is what organizational arrangements actually produce the amplification pattern rather than the outsourcing one – and who decides.
The honest answer is that amplification of comprehension costs something in the short term. It is slower. It requires that workers have room to not accept the first answer, to push back on confident errors, to treat the tool’s failures as the most instructive thing it produces. Organizations optimizing for output legibility – the number of PRs, the documents shipped, the tickets closed – will not organically generate those conditions. The incentive architecture selects for outsourcing.
This makes it a question of sense-making before it is a question of governance – and governance becomes possible only once you can see what you are actually managing. The question is one of discernment, not of raw productivity. What is the work for? What capacity are we trying to sustain, and over what time horizon? Who is in the room when the adoption decisions are made – and do they have standing to name what is being traded away?
The synergistic dissatisfier, and its associated loss of comprehension, is already installed in most organizations. The question is whether anyone has noticed what it is doing to the matrix of human needs – and whether that question is even being asked.
What would AI adoption designed around amplifying comprehension rather than simulating it actually require in your context? And whose interests are served by the question staying vague?