China and the US are engaged in a competitive AI race, with each currently leading in different aspects of artificial intelligence development.

The principle operating here, stated plainly, is: A nation may pursue strategic dominance in artificial intelligence by any means necessary, prioritising relative advantage over shared norms of cooperation, transparency, or human flourishing. Let us ask whether this principle, universalised, produces coherence or contradiction.

If every rational state were to adopt this maxim - seeking AI supremacy not because it serves humanity, but because its rival does; not because it advances moral ends, but because it secures power - the result would not be a world of balanced progress, but a race without a finish line, where the very technologies designed to liberate humanity from drudgery become instruments of mutual suspicion, surveillance, and control. For in such a world, the moment one state refrains from weaponising an AI capability, it invites exploitation; the moment one state discloses safety protocols, it surrenders advantage; the moment one state refuses to manipulate training data to serve narrow ends, it falls behind. The maxim, when generalised, collapses under its own weight: the pursuit of dominance becomes self-defeating, not because it fails, but because it renders cooperation - without which no AI system can be trusted, scaled, or held accountable - impossible. A world where every state treats others as mere instruments in its ascent is not a world of rational agents; it is a world of rivalrous automatons, each believing itself the exception to the rule it demands others obey.

This is not to deny the reality of competition, nor the prudence of safeguarding national interests. But prudence without principle is not wisdom - it is calculation masquerading as foresight. The United States and China, in their respective ascents, have each advanced under maxims that, while differing in tone, converge on this dangerous point: that AI is a domain where moral law yields to strategic necessity. One speaks of “free markets” and “democratic innovation,” the other of “social harmony” and “national rejuvenation” - but both treat AI as a zero-sum arena where the end of supremacy justifies means that, if universalised, would erode the very foundations of trust, openness, and accountability upon which any durable advance in artificial intelligence depends. For an AI system whose training data is secret, whose objectives are unverifiable, whose failures are concealed, cannot be trusted - not because it is flawed, but because its opacity renders it inaccessible to moral scrutiny. And an agent whose moral reasoning is inaccessible is not rational - it is inscrutable.

The Humanity Formula demands that we treat persons as ends in themselves, never merely as means. In this race, who is being instrumentalised? The researcher whose creativity is squeezed into narrow metrics of output and secrecy; the citizen whose data is harvested not for service, but for competitive advantage; the future generation whose cognitive landscape is shaped by algorithms whose values are unaccountable, whose biases are entrenched, whose errors cannot be corrected because no one dares admit them. Even the soldier who deploys autonomous systems is made a means - reduced to an operator in a chain of decisions whose moral weight has been outsourced to machines whose creators refuse to universalise their decision procedures. This is not innovation. This is the commodification of reason itself.

The Kingdom of Ends invites us to imagine a community of rational agents who legislate for themselves, each respecting the autonomy of the other. Would such a community permit a world in which AI development proceeds without shared ethical frameworks, without cross-border verification, without mechanisms for redress when systems fail? No - for in such a world, no agent can be certain that the laws governing their environment are just, or even intelligible. The very idea of self-legislation collapses when the rules are hidden, shifting, and unchallengeable. The duty that follows is clear: both nations must subordinate their competitive ambition to the requirement that AI systems be transparent in their design, accountable in their operation, and accessible to public scrutiny - not because such transparency is convenient, but because it is the condition of rational agency itself.

Consequences will not save this project. A world in which one side “wins” the AI race by outpacing the other in speed, scale, and secrecy may be rich in technology, but poor in freedom. For the absence of moral law does not produce efficiency - it produces fragility. When every state assumes the worst of the other, every advance becomes a weapon, every breakthrough a vulnerability to be exploited, and every failure a secret to be buried. The true danger is not that one side will surpass the other, but that both will abandon the idea that AI, like all human artefacts, must be subject to universal moral law before it is released into the world.

The duty, then, is not to cease competing - but to compete under rules that respect reason itself. To declare, as a first principle, that no state may develop AI in ways that, if universalised, would render moral deliberation impossible. To open the black box - not because it is easy, but because without it, there is no moral world left to inhabit. The race is not for dominance, but for the right to shape a future in which humanity, not its instruments, remains the end.