China and the US are engaged in a competitive AI race, with each currently leading in different aspects of artificial intelligence development.
Researchers at the Beijing Institute of Technology’s AI ethics lab reported last autumn that their work on bias mitigation in facial recognition systems was routinely deprioritised in favour of performance benchmarks - accuracy, inference speed, model size - measured in isolation from social context. Their supervisor, when pressed, cited “national strategic priority” as justification. Not cruelty, but distance: the assumption that technical excellence, unmoored from lived experience, is itself an ethical framework.
This is not an isolated case. In Shenzhen, engineers at a leading surveillance-tech firm described a severed feedback loop: community complaints about over-policing in migrant districts were logged but never fed back into model training data, because the data pipeline was designed for predictive efficiency, not civic responsiveness. The assumption was not that the complaints were false, but that they were irrelevant to the model’s objective function. The objective function, in turn, was defined in a boardroom fifty miles from the districts it governed.
In the United States, the pattern is not mirrored but inverted: the problem is not the suppression of ethical review but its fragmentation. At MIT’s Media Lab, researchers have long collaborated with community organisations on participatory design projects - only to find their prototypes abandoned when scaled by municipal governments, which lack the institutional memory or political will to sustain the co-creation process. The failure is not technical but civic: the assumption that once the tool is built, the community will adopt it, rather than the reverse.
What we call a “race” in AI is in truth a contest over where knowledge resides. China’s strength lies in centralised data collection and rapid deployment - its systems are trained on the scale of entire provinces, not individual wards. The United States’ strength lies in decentralised experimentation - its systems are tested in diverse civic settings, but rarely scaled with fidelity to those settings. One leads with breadth, the other with depth. Neither is sufficient.
The settlement method teaches us that policy built on abstract categories - efficiency, innovation, security - will always stumble over the concrete. A facial recognition system that identifies faces at 99.8% accuracy may still misidentify Black women twice as often as white men - not because the algorithm is flawed, but because the training data did not include enough Black women across a sufficient variety of lighting conditions, angles, and expressions. That is not a technical glitch. It is a civic failure. It is the same failure that once led to typhoid outbreaks in Chicago’s Back of the Yards: engineers designed water filters based on lab samples, not on the silt-heavy flow of the Chicago River at 3 a.m., when the pumps were running and the workers were asleep.
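The arithmetic behind that claim is worth making concrete. The sketch below uses invented evaluation counts (not real benchmark data) to show how a single headline accuracy figure can mask a twofold difference in group-level error rates; the function name and all numbers are hypothetical, chosen only to match the 99.8%-accuracy example above.

```python
# Hypothetical illustration: aggregate accuracy can hide group disparity.
# All counts are invented for the sake of the example.

def group_error_rates(results):
    """results maps group name -> (errors, total); returns per-group error rate."""
    return {group: errors / total for group, (errors, total) in results.items()}

# Invented evaluation counts for a face-matching system.
eval_counts = {
    "white_men":   (2, 1000),   # 0.2% error rate
    "black_women": (4, 1000),   # 0.4% error rate: twice as often
}

rates = group_error_rates(eval_counts)

# The "headline" figure averages the disparity away.
overall_errors = sum(errors for errors, _ in eval_counts.values())
overall_total = sum(total for _, total in eval_counts.values())
overall_error_rate = overall_errors / overall_total  # 0.3% overall, i.e. 99.7% accuracy

print(rates)
print(overall_error_rate)
```

The point is not the specific numbers but the shape of the failure: the overall rate is a weighted average, so a system can report near-perfect accuracy while systematically failing one group at double the rate of another.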
The Chinese side assumes that scale obviates the need for granular feedback. The American side assumes that diversity of input guarantees fairness in output. Both are wrong. Scale without feedback produces brittle systems that work everywhere until they fail - then fail spectacularly, and in predictable ways. Diversity without integration produces pilot projects that impress funders but vanish when real-world pressure is applied.
What is at stake is not dominance, but durability. The nation that treats AI as a public good - like clean water or public health - will build systems that last. The nation that treats it as a competitive advantage - like steel or semiconductors - will build systems that win battles but lose trust.
I visited a community tech co-op in Oakland last month where elders taught teenagers to audit local algorithms - how to trace the decision path of a loan-approval chatbot, how to map the spatial bias in predictive policing dashboards. The kids asked: Why doesn’t the city just give us the code? The answer, of course, is that the code is not the problem - the problem is who gets to read it, and who gets to change it. The settlement method does not demand open-source code, only open access. Not transparency for transparency’s sake, but transparency as a condition of civic participation.
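What “tracing the decision path” means in practice can be sketched in a few lines. The rules, thresholds, and zip codes below are entirely invented for illustration; the point is the shape of the audit, which records every rule that fires so the path from input to outcome can be read, questioned, and contested.

```python
# A minimal sketch of the audit exercise described above: walk a
# hypothetical rule-based loan-approval policy and log each rule applied.
# Every rule, threshold, and zip code here is invented for illustration.

def audit_loan_decision(applicant):
    """Return (decision, trace), where trace lists each rule that fired."""
    trace = []
    if applicant["income"] < 30000:
        trace.append("rule 1: income below 30000 -> deny")
        return "deny", trace
    trace.append("rule 1: income check passed")
    # A spatial bias can hide inside an innocuous-looking rule:
    if applicant["zip_code"] in {"94601", "94621"}:
        trace.append("rule 2: flagged zip code -> manual review")
        return "review", trace
    trace.append("rule 2: zip code check passed")
    return "approve", trace

decision, trace = audit_loan_decision({"income": 45000, "zip_code": "94601"})
print(decision)        # "review"
for step in trace:
    print(step)
```

A trace like this is only auditable if the rules can be read in the first place, which is the kids’ point exactly: the precondition is not open-source code but open access to the logic being applied.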
The AI race will not be won by the fastest model, but by the most responsive one - the one that changes when the people it affects tell it to. Not because they are sentimental, but because they are right.
In Chicago, we used to say: The law is not what is written, but what is enforced. So too with AI: the algorithm is not what is coded, but what is applied. And application, like justice, must be local to be fair.