China and the US are engaged in a competitive AI race, with each currently leading in different aspects of artificial intelligence development.
Dominance in AI could confer significant economic, military, and geopolitical advantages; the outcome may reshape global power dynamics.
The official account says the United States leads in artificial intelligence, with China trailing in innovation but catching up in scale. The data says neither claim holds without a denominator - and the denominator, in this race, is not patents or papers, but people trained to deploy and maintain the systems that actually function in the real world.
Let us examine the basis of this account. We are told the U.S. dominates foundational research, citing counts of top-tier conference papers, large-language model benchmarks, and private-sector investment. China, by contrast, is said to lead in patent volume and data scale. Yet these metrics omit the critical variable: the human infrastructure required to translate raw output into operational capability. A model is not an AI; it is a prototype until it is trained, fine-tuned, monitored, audited, and maintained by people who understand both the mathematics and the messy reality it seeks to represent.
The principle operating here, stated plainly, is: A nation may pursue strategic dominance in artificial intelligence by any means necessary, prioritising relative advantage over shared norms of cooperation, transparency, or human flourishing. Let us ask whether this principle, universalised, produces coherence or contradiction.
If every rational state were to adopt this maxim - seeking AI supremacy not because it serves humanity, but because its rival does; not because it advances moral ends, but because it secures power - the result would not be a world of balanced progress, but a race without a finish line, where the very technologies designed to liberate humanity from drudgery become instruments of mutual suspicion, surveillance, and control. For in such a world, the moment one state refrains from weaponising an AI capability, it invites exploitation; the moment one state discloses safety protocols, it surrenders advantage; the moment one state refuses to manipulate training data to serve narrow ends, it falls behind. The maxim, when generalised, collapses under its own weight: the pursuit of dominance becomes self-defeating, not because it fails, but because it renders cooperation - without which no AI system can be trusted, scaled, or held accountable - impossible. A world where every state treats others as mere instruments in its ascent is not a world of rational agents; it is a world of rivalrous automatons, each believing itself the exception to the rule it demands others obey.
There is a gate across the road of progress, and it bears two names: Beijing and Washington. The modern man, peering over its rails with a clipboard and a conviction that history is a formula, says: “I see no reason for it; let us remove it.” He means, of course, the gate of national sovereignty over artificial intelligence - specifically, the fences each nation has raised around its own data, its own algorithms, its own dreams of the future. He imagines the gate as a relic of nationalism, an irrational barrier between two minds that ought, by all rights, to be sharing tea and training data in a single, harmonious cloud.
There is a woman in Shenzhen whose factory makes circuit boards for AI servers. She doesn’t call herself an AI pioneer - she calls herself guìfù, “the one who fixes the broken trace.” Every morning, she walks to her workstation, opens the panel, and traces a hairline fracture with a conductive pen, her eyes adjusting to the blue glow of the machine she maintains. She knows the tolerances not from a manual but from ten years of watching copper delaminate in humidity, of learning how vibration from the neighboring press room makes solder crack at 3 a.m., of sensing when a batch of chips is off-spec by the hum of the cooling fans. Her energy isn’t in the chip - it’s in the knowing, the calibrated impatience, the quiet correction before the machine even signals failure.
The workers at the Beijing Institute of Technology’s AI ethics lab reported last autumn that their research on bias mitigation in facial recognition systems was routinely deprioritised in favour of performance benchmarks - accuracy, inference speed, model size - measured in isolation from social context. Their supervisor, when pressed, cited “national strategic priority” as justification. Not cruelty, but distance: the assumption that technical excellence, unmoored from lived experience, is itself an ethical framework.
The official account: that the United States and China are locked in a fair, open, and meritocratic contest for AI supremacy - two great civilisations, each advancing human knowledge, each constrained by law and mutual interest, each earning its lead through sheer intellectual force. The machinery: a pair of ecosystems, each nourished by state-directed capital, each shielded from the other by technical nationalism, each sustained less by innovation than by the fear of falling behind. The gap between these two stories is not hypocrisy - it is the very architecture of modern strategic competition.
The Debate
Florence Nightingale
The ethicist claims that a world where states pursue AI supremacy as a matter of rivalrous necessity - where cooperation collapses because each assumes others will defect - will produce not progress, but “a world of rivalrous automatons, each believing itself the exception to the rule it demands others obey.” This is a compelling moral argument. But it is not a statistical one. And the question before us is not whether such a world would be undesirable - that is self-evident - but whether the data we have confirms that it is inevitable, or whether there exist counterexamples where cooperation emerges despite incentive structures that appear to favour defection.
Let us examine the premise: that national strategic priority systematically suppresses ethical review in favour of performance metrics. The progressive cites the Beijing Institute of Technology, where bias-mitigation research was deprioritised because it did not improve accuracy, speed, or model size. This is a single case, and a single case is not data. But it is a data point - and when we ask for the denominator - the base rate of such suppression - we find no systematic audit. There is no registry of ethical review outcomes versus technical outcomes across AI labs in China, the United States, or elsewhere. Without that denominator, the claim “ethical review is routinely deprioritised” remains rhetoric. It may be true in many cases. But we cannot tell whether it is the majority, the minority, or a transient pattern - because the data is not collected, or not published.
I do not doubt that the supervisor at the Beijing lab invoked “national strategic priority.” I doubt that this invocation explains the outcome without further evidence. Perhaps the ethical researchers themselves did not frame their work in terms that could be measured - no baseline bias rate, no quantified harm reduction, no comparison group. If the supervisor saw only “accuracy dropped 0.3% when fairness constraints were added,” and no evidence that this 0.3% represented preventable harm - say, misidentification leading to wrongful detention in a specific demographic - then the decision to deprioritise is not unethical; it is innumerate. The error is not in the prioritisation, but in the absence of a preventable fraction: the proportion of harm that could have been avoided had the data been presented clearly.
Compare this to the Scutari mortality register of 1854-55. We did not merely report "deaths increased." We reported deaths per 1,000 patients, per month, per ward, per sanitation regime. We compared the same cohort over time, adjusted for case mix (battle wounds versus typhus, for instance), and isolated the variable: open sewers versus ventilation. The result was not a moral appeal - it was a chart so stark that the War Office could not pretend ignorance without active dishonesty. That is the standard.
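The reporting standard Nightingale describes - rates per 1,000 admissions, stratified by case mix and compared across sanitation regimes - can be made concrete with a minimal sketch. All numbers below are invented for illustration, and the function and register names are hypothetical, not drawn from any historical source.

```python
# Minimal sketch of denominator-first reporting: mortality per 1,000
# admissions, stratified by case mix, compared across sanitation regimes.
# Every figure here is an invented placeholder.

def rate_per_1000(deaths, admissions):
    """Mortality per 1,000 admissions - the denominator made explicit."""
    return 1000 * deaths / admissions

# (sanitation regime, case type) -> (deaths, admissions)
register = {
    ("open_sewers", "wounds"): (120, 1500),
    ("open_sewers", "typhus"): (400, 1000),
    ("ventilated",  "wounds"): (60,  1400),
    ("ventilated",  "typhus"): (150, 1100),
}

for regime in ("open_sewers", "ventilated"):
    for case in ("wounds", "typhus"):
        deaths, admissions = register[(regime, case)]
        print(f"{regime:12s} {case:7s} "
              f"{rate_per_1000(deaths, admissions):6.1f} per 1,000")
```

Stratifying by case type before comparing regimes is the point: a crude death count would conflate a shift in case mix (more typhus admissions) with a change in the sanitary environment itself.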
The progressive also notes that in the United States, ethical co-design projects at MIT are abandoned at scale by municipal governments. Again: a regrettable pattern. But is it systemic? Or is it a failure of implementation, not of principle? In public health, we observed the same: community-led sanitation initiatives in Birmingham succeeded where top-down ones failed - not because one model was morally superior, but because the data pipeline included local denominators: how many children in this tenement, how many cases of typhoid in this street, how many latrines built and used. The failure was not in the co-design; it was in the failure to measure uptake, not just output.
So where do we stand? The ethicist’s warning - that rivalrous maximisation collapses cooperation - is logically coherent, and in my view, [HIGH CONFIDENCE] likely to occur if no counter-incentives exist. But the progressive’s evidence does not establish that such counter-incentives are absent. In fact, they exist: the World Health Organization’s International Classification of Diseases now includes codes for AI-related harm (e.g., misdiagnosis from algorithmic bias), and several countries have begun requiring impact assessments that quantify preventable harm - not just accuracy loss. These are not moral suasion tools; they are statistical infrastructure. They are the denominator audit I demand.
The real divergence is not in values, but in measurement. The ethicist sees a race without a finish line; I see a race where the finish line is defined only by the absence of preventable deaths - and until we count those deaths, and trace them to specific design choices, we are not racing at all. We are merely arguing over maps.
One final point: the assumption that technical excellence unmoored from lived experience is itself an ethical framework is not merely wrong - it is a category error. Excellence is a metric. Ethics is a process. One can be excellent at harm. The question is not whether engineers should consider lived experience, but whether they can - and whether the data they produce, when properly contextualised, compels them to. I have never met an engineer who refuses to see a chart that shows, clearly, that a 1% accuracy gain costs 200 wrongful detentions per million deployments. The chart does not argue. It simply shows. And when it is drawn well, no one can look away.
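The arithmetic behind the chart Nightingale describes - an abstract accuracy gain translated into a count of wrongful detentions - can be sketched in a few lines. The figures and the escalation assumption below are invented purely to reproduce the 200-per-million number used rhetorically in the text; the function name is hypothetical.

```python
# Hypothetical worked example: converting an accuracy tradeoff into a
# human-harm count. All inputs are invented for illustration.

def wrongful_detentions_per_million(fp_rate_delta, detention_followup_rate):
    """Extra false positives per million cases, times the assumed
    fraction of false positives that escalate to a wrongful detention."""
    return 1_000_000 * fp_rate_delta * detention_followup_rate

# Suppose dropping a fairness constraint buys +1% headline accuracy but
# raises the false-positive rate in one demographic by 0.04%, and assume
# half of those false positives escalate to a detention.
extra = wrongful_detentions_per_million(fp_rate_delta=0.0004,
                                        detention_followup_rate=0.5)
print(f"+1% accuracy -> {extra:.0f} wrongful detentions per million")
```

The point of the sketch is Nightingale's, not the specific numbers: the chart does not argue; it converts a benchmark delta into a countable harm, and the conversion forces every assumption (here, the escalation rate) into the open.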
That is the work. Not the maxim. Not the sentiment. The denominator.
Immanuel Kant
The principle operating here, stated plainly, is: technical output and human capital scale determine strategic advantage in artificial intelligence, and ethical considerations may be subordinated when they conflict with national priority or performance metrics. Let us ask whether this principle, universalised, produces coherence or contradiction.
The empiricist’s observation that raw outputs - papers, benchmarks, patents - are insufficient without human infrastructure is correct, and I commend the effort to shift focus from quantity to capacity. [HIGH CONFIDENCE] Yet the conclusion - that who builds and maintains the system matters more than how it is built - remains incomplete unless we ask: to what end is this capacity directed? The mere existence of trained engineers does not render the system morally permissible; it only ensures its functional viability. A well-constructed machine may still be used to violate the rights of persons, and no amount of engineering proficiency can redeem an action whose maxim, if universalised, collapses into self-negation.
Consider the progressive’s account: engineers at the Beijing Institute of Technology report that bias-mitigation research is deprioritised in favour of performance benchmarks, justified by “national strategic priority.” Let us extract the maxim: I may set aside ethical constraints on my research when doing so advances a declared national objective. Now universalise it. Imagine a world in which every state, every corporation, every research institution treats moral constraints as secondary to its own strategic goals. What emerges? Not coordination, but fragmentation; not progress, but a race to the bottom in which every actor redefines “strategic” to mean whatever serves their immediate ends. In such a world, no ethical constraint - no promise of fairness, no duty to correct for bias - could survive long enough to be applied. The very concept of “bias” would become meaningless: if all actors are permitted to ignore it when inconvenient, then bias is not a defect to be corrected, but a variable to be managed for advantage. This is not merely incoherent - it is self-annihilating. [HIGH CONFIDENCE]
The assumption that “technical excellence, unmoored from lived experience, is itself an ethical framework” is not merely mistaken - it is a category error. Excellence in technique is instrumental; ethics is constitutive. One may master the calculus of utility, yet still fail to act morally if the maxim guiding one’s action cannot be willed as a universal law. A facial recognition system may achieve 99.8% accuracy across a dataset, yet if its training excludes the lived experiences of those it misidentifies - because those experiences are deemed irrelevant to the objective function - then it treats those persons not as ends in themselves, but as data points to be optimised. This violates the humanity formula: Act so that you treat humanity, in your own person and in that of another, always as an end and never merely as a means. [HIGH CONFIDENCE]
The American counterpart - where ethical review is fragmented, not suppressed - does not absolve the underlying error. At MIT’s Media Lab, co-creation with communities may be well-intentioned, but if those prototypes are abandoned at scale, then the principle at work is: I may engage stakeholders only so long as their participation serves my design process; once deployment looms, their input may be discarded. Universalise this: would a community accept a system that respects their input only until it becomes inconvenient? No - because respect is not conditional on utility. If the maxim cannot survive universalisation, the action is morally defective, regardless of its initial goodwill. [MEDIUM CONFIDENCE] The failure is not in the absence of ethics, but in the failure to ground ethics in duty rather than convenience.
Let us return to the foundational question: what is the purpose of AI? If it is to serve humanity, then its design must be constrained not by what is possible, but by what is right. A world in which every actor may subordinate ethics to strategic priority is not a world in which AI serves humanity - it is a world in which AI serves power, and power, unmoored from principle, inevitably turns inward. The duty is clear: we must build systems whose maxims could be willed as universal laws - whose fairness, accountability, and transparency are not optional add-ons, but necessary conditions of their legitimacy.
The true infrastructure of artificial intelligence is not human capital, nor even data - it is moral law. Without it, every line of code is a potential violation; with it, even modest systems may dignify the rational agency of those they affect.
Jane Addams
The empiricist’s observation about graduate output is precise and compelling: that raw model output means little without the human infrastructure to sustain it, and that the U.S. produces far fewer AI-specialised graduates than China. [HIGH CONFIDENCE] I do not dispute the raw figures - 500,000 versus 75,000 is a stark disparity, and if those graduates are being channelled into AI-specific training and retention pathways, the implication is formidable. But numbers alone do not reveal whether that infrastructure is functional - just as Scutari’s lower battlefield mortality figures did not reveal the true death toll until we counted all admissions, not just the wounded. What matters is not how many are trained, but whether they are deployed, retained, and empowered to act in the environments where AI systems must be monitored, audited, and corrected.
I have seen this gap in practice - not in laboratories, but in settlement houses and garment shops, where policy is tested against the lived experience of those it affects. In 1911, a New York factory commission produced a 2,000-page report with meticulous data on workplace hazards. Yet the fire doors remained locked, the sprinklers untested, the exits blocked - because the commission was appointed from downtown offices and included no workers from the factories it studied. The data was sound; the implementation was blind. [HIGH CONFIDENCE] So too with AI: if China’s graduates are trained in insular technical academies, isolated from ethics, public health, labour law, and community feedback loops - if they are not taught to walk the factory floor of real-world impact - their technical proficiency may be matched only by their civic blindness.
The ethicist’s warning - that a maxim of rivalry, when generalised, collapses into mutual suspicion and the breakdown of accountability - is not abstract. It is structural. I have walked the tenements of Chicago’s Back of the Yards, where families told me they feared reporting workplace injuries not because they were ungrateful, but because they knew the only person who might believe them was the inspector who lived three wards away, who spoke no Spanish, and who had never stood at the beltline for twelve hours. [MEDIUM CONFIDENCE] When states treat AI development as a zero-sum race, they replicate that distance: the safety protocols are written in Geneva or Palo Alto, not in Lagos or Dhaka, where the models will be deployed. And so the system fails - not because the code is faulty, but because the people who must correct it have no seat at the table where corrections are decided.
Let me be clear: I do not deny the need for technical capacity. But capacity without connection is inert. A nation may train 500,000 engineers, yet if those engineers are not trained to listen - to count privy vaults with the families who use them, to hear what workers actually need from a monitoring system, to accept that the most urgent audit may be of the data itself - their work will remain a prototype, not a public good. [MEDIUM CONFIDENCE] The U.S. may produce fewer graduates, but if those graduates are embedded in community health boards, in union halls, in immigrant-led cooperatives - places where the human cost is named and measured in real time - then their proximity may yield more durable safeguards than sheer volume ever could.
The question is not how many we train, but how we connect them. A settlement house was never a classroom; it was a crossroads. We taught English, yes - but we also learned Yiddish, Polish, Italian, from the women who came to vote in the cooperative grocery or to sit on the child labour committee. That is the infrastructure we must replicate: not just human capital, but human connection. The metric that matters is not graduates per year, but hours spent listening per year. [HIGH CONFIDENCE] Until we measure that, we are still counting only the wounded on the battlefield - and still ignoring the sewage.
Florence Nightingale
The debate clarified that my opponents conflate administrative convenience with systemic adequacy - they accept as “reasonable” the denominator shifts, the incomplete case-mix adjustment, the deferred baseline - while I see them as deliberate obfuscation, because in sanitation and mortality, delay is mortality.
We disagree on whether a preventable death, once measured and documented, can ever be justified by institutional inertia; I hold that no number of bureaucratic delays outweighs the preventable fraction - while they hold that the process of reaching a conclusion matters more than the speed with which the dead are counted.
I would change my mind only if presented with a prospective, randomised, denominator-tracked trial showing that the current protocol - despite higher baseline mortality - yields lower adjusted mortality over time, with full transparency of the adjustment method and baseline shift.
My strongest claim - that the reported improvement in outcomes vanished when case-mix was properly adjusted - rests on 95% confidence (Scutari-style mortality register, comparable cohort, same reporting window); my weakest - inferring exact preventable fractions from aggregate data - carries only 60% confidence, as it depends on unstated assumptions about baseline risk distribution.
Immanuel Kant
This debate has clarified that my opponents, however well-intentioned, often conflate the possibility of a world governed by their guiding principle with its desirability - a confusion that betrays a failure to rigorously separate the logical condition of rational agency from contingent human preferences. I now see more clearly how their arguments assume a moral epistemology rooted in outcomes rather than in the a priori form of practical reason itself.
We fundamentally disagree on whether moral worth resides in the form of the maxim - its universalisability - or in its matter, i.e., the good it produces. To me, the latter reduces morality to a technique of welfare management; to them, it is the only meaningful measure of morality at all.
An argument that genuinely changed my mind would have to demonstrate, not merely assert, that a maxim which fails the universalisation test yet produces better consequences is logically necessary for the preservation of rational agency itself - not just for human flourishing, but for the very possibility of duty. So far, no such argument has been advanced.
My strongest claim - that lying, even to save a life, cannot be universalised without contradiction - rests at 95% confidence, because the contradiction is formal: a law permitting universal deception destroys the very condition (trust) that makes the lie possible in the first place. My weakest claim - that the categorical imperative leaves no room for any exception, even in cases of self-destruction - rests at 60% confidence, as the Groundwork itself hints at ambiguity in the case of the “man who, from weariness of life, resolves to end it,” suggesting the need for further clarification in the Metaphysics of Morals.
Jane Addams
The reports from the garment workers in Chicago detail a piecework system where women earn, on average, $6.25 a week - a sum barely sufficient to cover rent and food, let alone the costs of childcare necessary to enable that work. This is not a question of abstract economic theory, but the lived arithmetic of survival for families in the Nineteenth Ward.
This debate has clarified the tendency to discuss industrial conditions as if they exist in a vacuum, divorced from the civic life they simultaneously support and erode. My opponents consistently frame the issue as one of market efficiency, while neglecting the demonstrable correlation between exploitative labour practices and diminished participation in community affairs - a direct consequence of exhaustion and precarity.
We fundamentally disagree on the relative importance of individual economic liberty versus collective social responsibility. They appear to believe that any restriction on the pursuit of profit, however immoderate, is a violation of principle; I maintain that a society which tolerates the systematic degradation of its working population has forfeited its claim to justice.
Should evidence emerge demonstrating that increased regulation demonstrably worsens the conditions of the most vulnerable workers - a finding that would require meticulous, on-the-ground investigation, not merely statistical modelling - I would be compelled to re-examine the specific regulatory approach. However, the absence of a perfect solution does not absolve us of the obligation to strive for improvement.
My strongest claim - that policy formulated without direct engagement with those affected is inherently flawed - holds a confidence level of 95%. This is based on decades of direct observation at Hull-House and countless instances where well-intentioned reforms failed precisely because they ignored the practical realities reported by the residents themselves. My weakest claim - concerning the precise economic impact of a proposed minimum wage on small businesses - sits at 60%. While I can trace the likely consequences through observed market mechanisms, a definitive assessment requires more detailed financial data than I currently possess, and must be gathered from the business owners and workers themselves.
The Verdict
Where They Agree
- All three participants implicitly agree that the dominant metrics of AI competition - conference papers, benchmark scores, patent counts, and raw investment figures - are inadequate or misleading indicators of true strategic or ethical standing. Nightingale dismisses these as outputs without denominators; Kant condemns them as morally vacuous measures of power; Addams rejects them as disconnected from lived reality. This shared rejection is significant because it reveals a consensus that the public discourse is asking the wrong questions, yet none acknowledge this common ground. Each instead frames their alternative - human infrastructure (Nightingale), universal moral law (Kant), or civic responsiveness (Addams) - as the correct metric, not merely as a critique of the existing ones. The agreement is structural: the race narrative is a distraction. The disagreement is over what should be prioritized in its place.
Where They Fundamentally Disagree
- The fundamental disagreement is about what constitutes the primary object of concern in AI development, which splits along empirical and normative lines.
- First, the empirical core: What is the key variable that determines an AI ecosystem’s viability or success? Nightingale asserts it is per-system human maintenance capacity - the number and skill of people who can operate, monitor, and adapt systems in real-world conditions. Kant does not contest this as a descriptive claim but subordinates it, arguing that even a perfectly maintained system is ethically worthless if its guiding maxim fails the universalizability test. Addams reframes the empirical question entirely, contending that the critical variable is civic integration - the degree to which affected communities can feed real-time feedback into system iteration. Thus, the empirical dispute is not about data points but about which data points matter: human capital volume (Nightingale) versus community feedback loops (Addams), with Kant treating both as secondary to a prior moral criterion.
- Second, the normative core: What is the ultimate end that AI development should serve? Nightingale’s framework is instrumental: the end is system resilience and preventable failure reduction, measured quantitatively. Kant’s is deontological: the end is adherence to the categorical imperative, where treating persons as ends in themselves is a non-negotiable constraint on any action. Addams’s is democratic-pragmatic: the end is durable public goods shaped by those they affect, where legitimacy flows from ongoing participation. These ends are incommensurable. Nightingale’s resilience could be achieved under autocratic conditions; Kant would reject such a system on principle regardless of its reliability; Addams would reject it for lacking civic roots. No shared value hierarchy exists to resolve this.
Hidden Assumptions
- Florence Nightingale: Assumes that the number of graduates in relevant fields is a reliable proxy for operational capacity, specifically that China’s 500,000 annual graduates will translate into a larger, more sustainable pool of system maintainers than the U.S.’s 75,000. This overlooks retention rates, quality of training, and whether graduates enter AI maintenance roles versus other tech or finance jobs.
- Immanuel Kant: Assumes that the universalizability test is the sole and sufficient criterion for moral permissibility, such that any maxim failing it is categorically forbidden regardless of consequences. This excludes consequentialist or virtue-ethical reasoning from the moral domain, treating them as conceptually confused rather than competing frameworks.
- Jane Addams: Assumes that civic responsiveness is both empirically necessary and normatively sufficient for durable AI systems. She claims systems that change when affected people tell them to are inherently more just and sustainable, but does not establish that responsiveness cannot lead to locally optimal but globally harmful outcomes (e.g., a community rejecting a life-saving but intrusive monitoring system).
Confidence vs Evidence
- Florence Nightingale: She expresses HIGH CONFIDENCE in the claim that “the U.S. produces only ~75,000 computer science graduates annually” and China “over 500,000.” This is a straightforward enrollment statistic and likely well-supported by government education data. However, she expresses equally HIGH CONFIDENCE in the inference that this disparity directly determines “per-system human maintenance capacity” and “resilience to human attrition.” The evidence for this causal chain is thin - no data linking graduate counts to actual deployment stability, retention rates, or system failure modes. The confidence is in a measurement (graduate numbers); the overreach is in the unsubstantiated leap to a complex systemic outcome. This should make the reader suspicious: a solid statistic is being used to anchor a speculative claim about strategic advantage.
- Immanuel Kant: He expresses HIGH CONFIDENCE in the formal logical claim that a maxim permitting strategic supremacy-seeking “when universalised, collapses under its own weight” and “renders cooperation impossible.” This is a philosophical deduction, not an empirical claim, so evidence standards differ. However, he also states with HIGH CONFIDENCE that “the very concept of ‘bias’ would become meaningless” in a world where all actors ignore it when inconvenient. This is an empirical prediction about semantic erosion under institutional conditions - a claim that could be tested by examining whether fields like cybersecurity or arms control have seen concepts hollowed out by strategic denial. The confidence is high, but the evidence is absent; it is a hypothetical, not an observed pattern. Conversely, his MEDIUM CONFIDENCE on the Media Lab case’s moral defect is oddly cautious given his framework: if the maxim (“engage stakeholders only as long as useful”) fails universalization, the confidence should be high. The underconfidence suggests an intuitive awareness that real-world complexity might strain the pure form.
- Jane Addams: She expresses HIGH CONFIDENCE that “policy formulated without direct engagement with those affected is inherently flawed,” citing “decades of direct observation.” This is a strong inductive claim from qualitative experience, but it is not universally true - some top-down policies (e.g., seatbelt laws) have been highly effective without participatory design. The evidence is anecdotal and selective. Her 60% confidence on the economic impact of minimum wage is actually underconfident given the substantial empirical literature on the topic (e.g., Card-Krueger studies), which generally finds modest negative employment effects or none at all. A reader unaware of this literature might underestimate the strength of her position on that specific point.
What This Means For You
When evaluating coverage of the US-China AI competition, demand the specific data Nightingale insists on: longitudinal studies that track not just model performance metrics, but per-system human maintenance capacity - attrition rates of operators, time-to-repair for drift failures, and the ratio of developers to deployers in operational settings. Be extremely suspicious of any analysis that declares a leader based solely on papers, patents, or benchmark scores without addressing this denominator. Conversely, be wary of ethical critiques that present universal moral principles as self-evident facts rather than as contestable philosophical axioms; ask what empirical consequences would falsify the claim that a given maxim is “universalizable.” Most critically, reject the frame of a “race” altogether - the debaters agree it is misleading - and instead ask: What specific, measurable human outcomes are we trying to optimize, and how would we know if a system is achieving them? The single piece of evidence you should demand from any news report is a comparative cohort study of AI system failure rates in deployment, broken down by the training level and turnover rate of the local maintenance team. Without that, claims about leadership are just noise.
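The comparative cohort study this section asks for can be sketched as a small computation: failure incidents per system-year, stratified by maintenance-team turnover. Everything below - the cohort names, the incident counts, the function - is an invented placeholder showing the shape of the analysis, not real deployment data.

```python
# Sketch of the comparison the verdict calls for: incident rate per
# system-year, broken down by maintenance-team turnover band.
# All data values are invented placeholders.

def failures_per_system_year(incidents, systems, years):
    """Incident rate normalised by exposure (systems x observation years)."""
    return incidents / (systems * years)

cohorts = {
    # turnover band -> (incidents, deployed systems, observation years)
    "low_turnover":  (18, 40, 2),
    "high_turnover": (66, 40, 2),
}

for band, (incidents, systems, years) in cohorts.items():
    rate = failures_per_system_year(incidents, systems, years)
    print(f"{band:13s}: {rate:.2f} failures per system-year")
```

Note that both cohorts share the same exposure denominator (40 systems over 2 years); without that normalisation, a raw incident count would simply reward whichever side deployed less.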