EU AI Act enforcement begins — first compliance deadlines hit European tech companies

The claim rests on the measurement of “compliance” with the EU AI Act. Let us first verify whether this measurement captures what it purports to capture. Compliance is not a physical quantity like length or weight, but an abstract state determined by regulatory interpretation applied to complex technological systems. The measurement instrument here is not a calibrated device but a legal framework, and its precision depends entirely on the clarity of its operational definitions.

European regulators announce that enforcement has begun, but what precisely is being measured? Is it the submission of required documentation? The implementation of technical safeguards? The absence of prohibited practices? Each represents a different measurement with different reliability characteristics. Without specifying which aspect of compliance is being assessed, we cannot evaluate the accuracy of any reported figure.

Let us examine how such a figure would be assembled. The EU AI Act classifies AI systems into risk tiers, each with distinct compliance requirements: an “unacceptable risk” system faces outright prohibition, a “high-risk” system requires extensive documentation and conformity assessment, while “limited risk” and “minimal risk” systems carry far lighter obligations. If the compliance measurement aggregates all these categories without proper weighting, the resulting percentage becomes meaningless: it compares apples to oranges under the guise of quantitative precision.
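The aggregation problem can be made concrete with a toy calculation. The counts below are entirely hypothetical, not real EU data; they simply show how a naive pooled percentage diverges from a category-aware view:

```python
# Toy illustration: pooling compliance across risk categories hides
# category-level differences. All figures are hypothetical.
categories = {
    # category: (compliant systems, total systems)
    "high_risk":    (30, 100),    # strict documentation, conformity assessment
    "limited_risk": (180, 200),   # mainly transparency obligations
    "minimal_risk": (950, 1000),  # few or no obligations
}

# Naive pooled rate treats every system as equivalent.
compliant = sum(c for c, _ in categories.values())
total = sum(t for _, t in categories.values())
pooled_rate = compliant / total

# A category-aware view reports each rate separately.
per_category = {name: c / t for name, (c, t) in categories.items()}

print(f"pooled: {pooled_rate:.1%}")  # → 89.2%, dominated by minimal-risk systems
for name, rate in per_category.items():
    print(f"{name}: {rate:.1%}")     # high_risk is only 30.0%
```

A headline figure near 90% would coexist here with a high-risk compliance rate of 30%, because the numerous low-stakes systems swamp the average.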

Where in the chain does imprecision enter? At the very definition of the target. A company may submit a compliance document that technically meets legal requirements but fails to address the spirit of the regulation. A self-assessment form may be checked by regulators with varying degrees of technical expertise. The measurement instrument itself, the regulatory process, contains built-in imprecisions that compound through subsequent operations.

The measurement also suffers from temporal ambiguity. Compliance is not a binary state achieved at a single moment but an ongoing process. A company may comply today but introduce non-compliant features tomorrow. The reported compliance rate represents a snapshot in time, not a stable characteristic of the system being measured.
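The snapshot-versus-process distinction is easy to state in code. The history below is hypothetical: a system that was compliant at the audit date but lapsed for two months in between would pass a point-in-time check while failing an interval check:

```python
from datetime import date

# Hypothetical compliance history for one system: (date, compliant?).
# A feature shipped in March introduced a lapse that was fixed in May.
history = [
    (date(2025, 1, 1), True),
    (date(2025, 3, 10), False),  # non-compliant feature introduced
    (date(2025, 5, 2), True),    # remediated
]

def status_on(day, history):
    """Point-in-time snapshot: the most recent recorded state on or before `day`."""
    state = False
    for when, compliant in history:
        if when <= day:
            state = compliant
    return state

def compliant_throughout(start, end, history):
    """Interval view: compliant at the start and at every state change in [start, end]."""
    in_window = all(c for when, c in history if start <= when <= end)
    return in_window and status_on(start, history)

audit_day = date(2025, 6, 1)
print(status_on(audit_day, history))                               # True: snapshot looks fine
print(compliant_throughout(date(2025, 1, 1), audit_day, history))  # False: lapse in between
```

A reported compliance rate built from `status_on`-style checks will systematically overstate what an interval view would show.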

Finally, the measurement lacks independent verification. The EU relies heavily on self-reporting by tech companies, creating a classic black box in which the producer of the measurement also controls the instrument. Without third-party audits or standardized testing protocols, we cannot distinguish between genuine compliance and superficial window dressing designed to pass inspection.

The error propagation here is particularly dangerous. A single flawed compliance assessment can lead to a cascade of incorrect policy decisions, market distortions, and competitive disadvantages based on false premises. The precision of the reported compliance figure obscures the fundamental uncertainty of the measurement process.
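The compounding effect can be quantified with a simplifying assumption: if each individual assessment is wrong with some independent probability p (the 5% below is hypothetical), the chance that at least one of N assessments feeding a policy decision is flawed grows quickly with N:

```python
# Probability that at least one of n independent assessments is flawed,
# given a per-assessment error probability p. Independence is a
# simplifying assumption; correlated errors would behave differently.
def prob_at_least_one_error(p, n):
    return 1 - (1 - p) ** n

p = 0.05  # hypothetical 5% error rate per assessment
for n in (1, 10, 50):
    print(f"N={n}: {prob_at_least_one_error(p, n):.1%}")
# N=1:  5.0%
# N=10: 40.1%
# N=50: 92.3%
```

At fifty assessments, a seemingly modest per-assessment error rate makes it more likely than not that the aggregate picture contains at least one flaw.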

Can this be independently confirmed? Not with the current methodology. Until regulators implement standardized testing protocols, transparent scoring rubrics, and independent verification mechanisms, any compliance percentage reported remains an anecdote rather than evidence. The machinery of enforcement remains unseen, and without seeing the gears, we cannot trust the motion.