EU AI Act enforcement begins — first compliance deadlines hit European tech companies

In a glass-walled office in Dublin, a compliance officer named Orla watches the EU AI Act’s first enforcement deadline expire on her screen. She has spent the last eighteen months preparing, poring over model documentation, rewriting impact assessments, and holding workshops where she explained to engineers why “transparency” isn’t just a checkbox but a living contract with the public. Now the clock has run out, and she is staring at a red banner that reads NON-COMPLIANCE - NOTIFIED. The notice is addressed not to her company but to a small German firm building generative tools for rural clinics. The irony is not lost on her: the firm is based two hours from where she grew up, in a town where the hospital still uses paper charts. The notice cites Article 14, the human-oversight clause, and reminds them that failure to document “human oversight mechanisms” can trigger fines of up to fifteen million euros or three percent of global annual turnover, whichever is higher. Orla sighs, closes her laptop, and walks to the window. Below, the Liffey is high after weeks of rain, and the neon reflections on the water look exactly like the error messages she’s spent her career trying to prevent.

What Orla is witnessing is not merely a regulatory milestone. It is the first public stumble of a law that presumes to govern the future of intelligence while remaining largely silent on the intelligence of governance. The EU AI Act is the most ambitious attempt yet to impose democratic accountability on machine learning systems. Yet early enforcement reveals a structural blind spot: the law’s architects assumed that the people most affected by AI - frontline workers, patients, students, small business owners - would be consulted in the design of compliance. They were wrong. Instead, the burden of proof has landed on the very firms least equipped to carry it - start-ups, academic spin-offs, nonprofits - while the institutions that commissioned the systems in the first place remain comfortably out of frame.

In Galway, a district nurse named Siobhán O’Sullivan spends her evenings filling out forms for a new AI triage tool her hospital adopted last quarter. The system is supposed to reduce waiting times by predicting which patients need urgent care. On paper, it sounds reasonable. In practice, it has added twenty minutes to every assessment. The machine flags elderly patients with chronic conditions as low priority, not because their symptoms are stable, but because their charts lack the “structured data” the algorithm was trained on. Siobhán has tried to explain this in the mandatory feedback portal, but the portal only accepts numerical ratings and a single-line comment. She left a three-paragraph description of a 78-year-old farmer whose blood pressure spiked after the system downgraded him to “routine.” The response was automated: “Thank you for your feedback. Your submission has been logged.” The system does not know what a farmer’s blood pressure tells us about the limits of structured data. Neither, it appears, does the law.

The distance between the Act’s enforcement edifice and the lived reality of Siobhán’s ward is not accidental. It is structural. The law assumes that “high-risk” AI systems will be built by large corporations with deep compliance departments. It assumes that those corporations will consult affected communities. It assumes that the feedback loop closes at the point of deployment. None of these assumptions hold true for the majority of AI systems now subject to the Act. In Berlin, a collective of refugee women running a language-app startup told regulators they needed a human-in-the-loop model because their students - many of them illiterate in their own languages - relied on tone and gesture to understand meaning. The regulator replied that the Act does not require human oversight for language-learning tools unless they are used in educational institutions. The women pointed out that their students are exactly the kind of people who will never set foot in a school again. The regulator did not reply.

This is the comedy of the moment: a law that believes itself rigorous is performing its rigor on a stage that excludes the actors most harmed by the play. The enforcement deadlines arrived before the field offices did. The fines will fall on the smallest players while the architects of the systems - public agencies, multinational tech firms, venture capitalists - remain comfortably distant from liability. The Act is a magnificent telescope trained on the wrong galaxy.

What this means for you is simple. When you read the next compliance announcement, look for the names of the communities that were supposed to be consulted. If they are not there, the law is not regulating AI; it is regulating paperwork. And paperwork, as any nurse or farmer or refugee language teacher will tell you, is just another kind of distance.