EU AI Act enforcement begins — first compliance deadlines hit European tech companies

One notes, in the announcement of this new regulatory epoch, a peculiar omission: a definition of intelligence. The Artificial Intelligence Act, a document of considerable heft, establishes categories of risk, outlines obligations, and specifies deadlines with the meticulousness one expects from a continental bureaucracy. Yet, on the matter of the phenomenon it seeks to regulate, it maintains a strategic vagueness, as if legislating the behaviour of ghosts. It is not that the Act's definition of an "AI system" is incorrect; it is that the definition is a list of exclusions, a perimeter fence around a vacancy. One is reminded of zoological classifications for creatures known only by a single, disputed footprint.

A naturalist, observing the behaviour of the corporate organism under this new pressure, would note the immediate adaptation. The press releases from various technology firms are, to the untrained eye, a chorus of compliance and commitment. They speak of "responsibility," "ethical frameworks," and "alignment." This is the courtship display. The filings, however (those documents submitted to regulators and investors, which carry legal weight), tell a different, quieter story. They speak of "compliance costs," "strategic reprioritisation," and "potential impacts on innovation velocity." The press release addresses the public; the filing addresses the ledger. Both are true, in their way, but they describe different realities, like a map of a city that shows only the parks and omits the streets.

The anomaly, then, is not the Act itself, nor the corporate response. It is the seamless coexistence of these two narratives, the official certainty and the operational ambiguity. One might catalogue a series of such moments. The expert panel that unanimously endorsed a compliance framework, yet whose meeting minutes reveal three members dissenting on the grounds of “unworkable definitions.” The impact assessment that predicts minimal economic disruption, but whose annexe contains a single sentence noting a “non-linear risk model” whose outputs were deemed “not suitable for public summary.” The regulator’s public timeline, which shows a smooth glide path to implementation, contrasted with the internal memo, obtained through access-to-information requests, that describes the compliance software as “persistently generating errors when applied to systems not conceived within the EU regulatory paradigm.” These are not contradictions, in the institutional sense. They are simply different layers of documentation, each serving a distinct biological function for the organism that produced it.

What is one to make of this procession of damned data? The cosmic hypothesis, offered not as truth but as a lens no more absurd than the official clarity, is that the AI Act is not primarily about artificial intelligence. It is about jurisdiction. It is a territorial marker, a complex scent left by one bureaucratic species to warn off others. The chatter about “high-risk systems” and “prohibited practices” is the visible plumage; the substance is the assertion that within a certain geographical area, a certain set of rules will prevail. The technology companies, in turn, are not responding to a regulatory philosophy so much as they are navigating a new geological feature on their operational map. Their compliance is a form of migration.

A field biologist would conclude that the spectacle is not one of control, but of co-evolution. The regulator evolves more complex rules; the regulated evolve more sophisticated methods of presenting compliance. The true outcome of the AI Act may have little to do with the intelligence of machines and everything to do with the resilience of narratives. The data that will be damned will be the data that measures the gap between the cost of compliance and the prevention of harm. That figure will be exceptionally difficult to locate in any public report. It will be filed under “other.”

One is left with a simple observation. The great promise was that we would be governed by laws, not by men. We have achieved something both more complex and more mundane: we are governed by laws, and by the gap between those laws and the filings that answer them. The anomaly is not in the gap itself, but in our continued surprise at its persistence. The record shows it has always been there.