The full and final text of the EU AI Act, the European Union’s landmark risk-based regulation for applications of artificial intelligence, has been published in the bloc’s Official Journal.
In 20 days’ time, on August 1, the new law will come into force, and in 24 months — so by mid-2026 — its provisions will generally be fully applicable to AI developers. However, the law takes a phased approach to implementing the EU’s AI rulebook, which means there are various deadlines of note between now and then — and some even later still — as different legal provisions start to apply.
EU lawmakers clinched a political agreement on the bloc’s first comprehensive rulebook for AI in December last year.
The framework puts different obligations on AI developers depending on use cases and perceived risk. The bulk of AI uses will not be regulated at all, as they are considered low risk, but a small number of potential use cases for AI are banned outright under the law.
So-called “high risk” use cases — such as biometric uses of AI, or AI used in law enforcement, employment, education and critical infrastructure — are allowed under the law, but developers of such apps face obligations in areas like data quality and anti-bias.
A third risk tier applies lighter transparency requirements to makers of tools like AI chatbots.
For makers of general purpose AI (GPAI) models, such as OpenAI’s GPT, the technology underlying ChatGPT, there are also some transparency requirements. The most powerful GPAIs — generally defined by a compute threshold — can be required to carry out systemic risk assessments, too.
Heavy lobbying by some elements of the AI industry backed by a handful of Member States’ governments sought to water down obligations on GPAIs over concerns the law could hold back Europe’s ability to produce homegrown AI giants to compete with rivals in the U.S. and China.
Phased implementation
First up, the list of prohibited uses of AI will apply six months after the law comes into force — so in early 2025.
Banned (or “unacceptable risk”) use cases for AI that will soon be illegal include China-style social credit scoring; compiling facial recognition databases by untargeted scraping of the internet or CCTV; and the use of real-time remote biometrics by law enforcement in public places, unless one of several exceptions applies, such as during a search for missing or abducted persons.
Next, nine months after entry into force — so around April 2025 — codes of practice will apply to developers of in-scope AI apps.
The EU’s AI Office, an ecosystem-building and oversight body established by the law, is responsible for providing these codes. But the question of who will actually write the guidelines is still raising questions.
According to a Euractiv report earlier this month, the EU has been looking for consultancy firms to draft the codes, triggering concerns from civil society that AI industry players will be able to influence the shape of the rules that will be applied to them. More recently, MLex reported that the AI Office will launch a call for expression of interest to select stakeholders to draft the codes of practice for general purpose AI models following pressure from MEPs to make the process inclusive.
Another key deadline falls 12 months after entry into force — on August 1, 2025 — when the law’s transparency requirements for GPAIs will start to apply.
A subset of high-risk AI systems has been given the most generous compliance deadline: 36 months after entry into force — until 2027 — to meet their obligations. Other high-risk systems must comply sooner, after 24 months.