The European Union’s AI Act: It’s All in the Implementation

Having received final approval from the European Union’s Council of Ministers on May 21, the law is at last a reality.

June 13, 2024
Image caption: The EU AI Act is now law. But much work remains to be done, the authors note. (Dado Ruvic/REUTERS)

Having received final approval from the European Union’s Council of Ministers on May 21, the Artificial Intelligence (AI) Act is at last a reality. But the work is far from over. Indeed, in some respects, it’s just beginning. The devil is in the details: technical standards, guidelines and “codes of practice” that have yet to be developed.

And while the critical spotlight over the past few years has been on the European Union’s major institutions — the Commission, the Parliament and the Council — in the future, it will be on standards bodies such as CEN-CENELEC (the European Committee for Standardization and the European Committee for Electrotechnical Standardization) and less visible EU public institutions, such as the newly created AI Office, the European Centre for Algorithmic Transparency and the European Data Protection Board.

Moreover, not all of the act’s provisions are to become binding right away, providing standards bodies and institutions time to do their work. But this will also prolong uncertainty for companies and civil society.

Key areas yet to be roughed in include guidelines for developers of “general-purpose AI models” that pose “systemic risk,” covering, for example, risk management, data governance, technical documentation and human oversight. These guidelines will need to specify measures to assess and report accuracy, robustness and cybersecurity. Translating the act’s provisions into actionable practices, benchmarks and measurements will be far from straightforward.

Also on the to-do list are codes of practice, which will set rules for providers of general-purpose AI models deemed to pose “systemic risk.” Those rules include requirements for up-to-date technical documentation and transparency about how AI systems are trained and how they work. They are to be drawn up in collaboration with AI developers as well as stakeholders from academia and civil society.

All that makes these codes of practice more than simple text documents; they will also function as governance instruments. Because companies decide whether they want to be part of the process, and because they are allowed to help define the rules, these codes of practice can be regarded as emerging forms of co-regulation.

This is not the first time the European Union has resorted to this means of regulatory governance. In 2018, the Commission, together with large digital platform providers, drew up a code of practice to deal with disinformation.

Despite some challenges — the code had to be revised in 2022, and in 2023, X (formerly Twitter) left the group of signatories — this form of governance has the virtue of flexibility. Just last year, for example, signatories to the code of practice on disinformation were encouraged to develop ways to tackle the challenges posed by generative AI.

Fast-paced technological developments, mainly driven by the private sector, often leave regulators at a disadvantage, struggling to keep pace and heavily reliant on information provided by the developers themselves. This makes forms of co- and self-regulation a necessary addition to hard regulation. The question is whether this approach can be an effective check on a moving target such as AI risk in the long term.

Since the AI Act is not a standalone regulatory project, but part of the European Union’s existing digital regulatory framework, some emerging concerns, such as generative AI’s impact on the European elections that took place June 6–9, could be addressed using existing regulation, including the Digital Services Act (DSA) guidelines on mitigating election risks. However, these rules apply only to the platforms and search engines covered by the DSA, letting major AI players such as OpenAI and Anthropic off the hook, for the time being.

Another co-regulatory initiative aiming to close this gap is the AI Pact, a voluntary program encouraging companies to share their processes and measures to ensure early compliance with the AI Act. The pact could foster learning and speed up the implementation of the act well before its last provisions enter into force a few years down the line. The need for this initiative again underlines how challenging it is to regulate rapidly emerging technologies.

Although the adoption of the AI Act is a welcome and important step toward governance of this technology, the matter is anything but settled. A great deal of work remains to be done before the act comes fully into force. And political developments spurred by general-purpose AI models have demonstrated how much can happen in a relatively short time.

European standards-setters and the AI Office — itself facing big challenges as a newly created institution at the intersection of EU institutions, national governments and international AI governance — have their work cut out for them.

The implementation phase will be crucial in determining how effective the AI Act will be at reaping the benefits of this technology while mitigating risks. In the current context, bringing together all relevant stakeholders will be challenging. But it must be done — the international public good depends on it.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Nicole Lemke is a public policy researcher based in Lausanne, Switzerland, specializing in the governance of emerging technologies, with a focus on artificial intelligence (AI).

David Evan Harris is a CIGI senior fellow, Chancellor’s Public Scholar at UC Berkeley and faculty member at the Haas School of Business.