Someone else's choices
Meta is mass-distributing its Ray-Ban glasses. What do they reveal about the questions we fail to ask? What design choices do we condone the moment we accept licensing terms online?
When a data annotator in Nairobi tells Swedish journalists “we see everything – from living rooms to naked bodies,” he is not describing a malfunction. He is describing the pipeline. A joint investigation by Svenska Dagbladet and Göteborgs-Posten has documented what Meta’s terms of service obscure in a single carefully worded clause: intimate footage captured by Ray-Ban AI glasses – people undressing, using bathrooms, accidentally filming their bank cards – is routed to Sama, a subcontractor in Kenya, where workers, paid wages far below those in the markets where the product is sold, label it to train Meta’s AI systems. Anonymisation tools fail in difficult lighting. Faces that should be blurred are sometimes fully visible. Workers who raise concerns are dismissed. Meta, after two months without responding to the journalists, referred them to its privacy policy.
This is not primarily a privacy scandal. It is a value chain made visible.
The glasses are marketed with the language of intimacy and capability: an AI assistant on your face, seeing what you see, helping you navigate the world. That framing is not accidental. AI systems that present themselves as companions – attentive, responsive, apparently caring about your experience – generate stronger retention, more interactions, more data, and deeper resistance to switching to alternatives. The warmth is a product decision with measurable commercial returns. It is designed to manufacture a relationship in which the user feels served, while the operational logic running underneath extracts value from everything the relationship generates: the footage, the transcriptions, the behavioural data, the training labels produced by workers in Nairobi at $2 an hour.
You can scrutinise a tool. You feel disloyal scrutinising a companion, and you might not even consider scrutinising “someone” inside your circle of trust. That asymmetry is not a side effect of good product design. It is the point, and it has a name: manufactured addiction and engineered helplessness.
What makes the Ray-Ban case structurally significant – beyond its specifics – is that it demonstrates what brand authority does to governance. Ray-Ban carries decades of cultural weight: freedom, authenticity, style. Meta has 3 billion users across platforms – WhatsApp, Instagram, Messenger, Facebook, Threads – whose names have become household words, whether we like it or not. When these brands co-produce a wearable AI device and distribute it through mainstream retail channels, they are not merely selling a product. They are normalising a set of practices – pervasive recording, intimate data extraction, offshore annotation labour – that most users would find unacceptable if described plainly on the packaging. Brand trust is doing governance work in place of citizens and their elected representatives, of the regulators and authorities that human societies have formed to govern the commons. Corporations define rules that no democratic institution designed or approved, including by single-handedly deciding how their algorithms should function – banning famous works of art that contain nudity, for instance, while allowing, and sometimes actively promoting, profanities from populist candidates, as though freedom of speech were superior to creative liberty. The LED indicator on the frame, officially described as “the privacy feature”, is carrying a weight it was never built to bear. It was never designed as a control or governance mechanism. It is merely a legal defence.
This whole mechanism, which eliminates mindfulness and discernment, is the structural condition that the anthropomorphism regulation examined in The question China asked was, however imperfectly, attempting to address. It is also the most plausible explanation for the rules China enforced on Douyin, the version of TikTok that ByteDance brings to market in mainland China, long before the ban on anthropomorphism.
This pattern has a parallel in the social media debate that is worth examining. When critics argue that age restrictions on social media are the wrong intervention – that the real problem is algorithmic design, not access – they are not technically wrong. Toxic algorithmic mechanics do cause measurable harm, and blunt access restrictions have genuine costs. But the argument, however sophisticated in its technical register, produces a conclusion that happens to be exactly what platforms require: no restriction, regulatory focus on mechanisms the platforms control, and implicit trust that they will reform the thing they profit from not reforming. The absence of a governance arrangement capable of enforcing algorithmic reform is not addressed. The political economy of the argument – who benefits when it goes unanswered – is not examined.
The same logic applied to wearable AI says: don’t regulate the glasses, fix the anonymisation. Fix the labour conditions. Fix the terms of service language. Each of these is a reform the platform can manage, pace, and define on its own terms – while the underlying extraction model continues unchanged.
Physical distribution networks for connected devices are governance infrastructure, whether they are treated as that or not. A wearable sensor in several million homes, activated by brand trust, governed by terms of service buried in a privacy policy, generating training data reviewed under NDA in a country with no EU adequacy decision – this is not a product category that self-regulates to acceptable outcomes. The evidence is already in.
What relationship are these systems actually designed to cultivate, and for whose benefit? Who bears the cost when that design fails? And what would it mean to treat the distribution of connected devices with embedded AI the way we treat the distribution of other products capable of significant harm – not as a question for terms of service, but as a question for governance?
The annotator in Nairobi already knows the answer to the first question. The rest of us are still pretending it hasn’t been asked.


