The question China asked
When Beijing regulates how AI systems present themselves, banning anthropomorphism, many Western commentators see authoritarianism. They’re not wrong – but they’re looking at the wrong thing.
China’s draft regulations on AI anthropomorphism – prohibiting systems from simulating human emotions, claiming human identity, or implying they possess consciousness – generated predictable reactions. Civil liberties concerns on one side, geopolitical suspicion on the other. What received far less attention was the underlying premise: that how an AI system presents itself is a governance question, not a product decision. That the relationship a system invites users into is not incidental to its design but constitutive of it.
Simon Wardley noticed something adjacent in China’s AI governance framework that deserves closer reading. Beijing’s requirements – that training data come from traceable, auditable sources and that the values embedded in systems be legible, accountable and in line with the prevailing orthodoxy – are not primarily about censorship, though they serve that purpose too. They reflect an understanding that sovereignty over AI has almost nothing to do with where models are hosted or where data is stored, and everything to do with what happened at inception: what the system was trained on, what values were embedded in its design, what relationship with users it was built to cultivate – and by whom, for whose benefit.
This is where the Western debate has been conducted in the wrong register. The argument about data residency – keep European data on European servers, run models in national clouds – addresses a territorial concern while leaving the substantive governance question entirely untouched. A model trained on data selected by a platform optimising for engagement – which is to say, for captive audiences and addiction – and designed to present warmth and apparent care because retention metrics reward it, does not become a different kind of system because it runs on servers in Frankfurt. The values are not in the infrastructure or its location. They are in the choices made before the first training run ever started, before a single parameter was set.
For universities and educators, this matters in ways that the current debate about AI in education almost entirely misses. The dominant controversy – should students be allowed to use AI, how do we detect its use, what counts as academic integrity – treats these tools as calculators: passive devices with a basic interface, entirely under their user’s local control and free of remote interference. That analogy was always imprecise, but it has now become actively misleading. Calculators have no embedded worldview. They were not designed to be your intellectual companion. They do not present themselves as understanding you, caring about your development, or having a perspective on the subject matter. These systems do all of that. And that design choice was made by organisations whose interests are not, structurally, aligned with learning – particularly since learning also means acquiring agency and autonomy, if not outright independence, depending on the discipline.
This does not mean AI tools have no place in education – the case for them as cognitive amplifiers remains sound, as researchers like Ethan Mollick have shown. It means that integrating them without examining the foundations, methods, data and means with which they were built is not pragmatism or “agility”. It is institutional negligence dressed as openness to innovation. It is a conflation of technical innovation with progress, a substitution of engineering application for scientific research, and a replacement of free minds by productive bodies. When a university adopts an AI teaching assistant, it is not simply acquiring a tool. It is inheriting an entire chain of choices – about training data, about the relationship the system is designed to cultivate, about the behaviours it optimises for and why – made elsewhere, by people with different objectives, accountable to different interests, in countries with far weaker governance of the commons.
China answered those questions. Badly, by the standards of societies committed to liberty and the rule of law: with state control, mandatory ideological conformity, and surveillance infrastructure baked into the governance framework. That answer is not available to, and should not be desired by, democratic societies. But the fact that Beijing answered badly does not mean the question was wrong. It means we need a different answer – and that requires, first, acknowledging that the question exists.
What would it mean for a university to actually inspect the value chain of the AI tools it integrates? What choices were made in training, and by whom? What relationship is the system designed to cultivate, and what purpose does that serve? Who decides how the system evolves, and who appropriates the results? These are not technical questions requiring specialist expertise. They are governance questions – the kind institutions ask, or should ask, about every other vendor relationship with significant stakes attached. That is one reason the governance question extends to procurement: are institutions actively creating the conditions for open-source alternatives to compete?
Calculators didn’t need a governance framework, even if their use had to be regulated – in exams, for instance. Companion tools with AI capabilities do: building and maintaining appropriate governance for them demands significant, sustained effort involving all stakeholders. Whether the tools we are integrating into education are one or the other is not a question the technology providers and platforms will answer voluntarily.
Who, then, is asking?