If your company operates in the EU and uses artificial intelligence in any capacity, this article is for you. Not for later. For right now.
Regulation (EU) 2024/1689, widely known as the EU AI Act, entered into force on 1 August 2024. Its obligations are rolling out in four phases. The first already passed in February 2025. The next major date is August 2026, when obligations for high-risk AI systems kick in — the ones that affect most businesses.
At Delbion we have spent months helping companies prepare for this. And the same story keeps coming up: confusion about what to do, when to do it, and what happens if you don't.
So here is the definitive guide. This is the complete EU AI Act timeline, broken down phase by phase, with practical explanations for any business operating in Europe. No unnecessary jargon. No hedging.
What is the EU AI Act?
The EU AI Act is the world's first comprehensive regulatory framework for artificial intelligence. It was approved by the European Parliament in March 2024 and published in the Official Journal of the EU on 12 July 2024 as Regulation (EU) 2024/1689.
Unlike the GDPR, which governs personal data, the AI Act governs artificial intelligence systems based on the risk they pose to people. The logic is simple: the higher the risk, the heavier the obligations on those who develop, deploy or use the system.
Here is the part most companies miss: the AI Act does not apply only to tech companies. It applies to any organisation that uses, develops or distributes AI systems in the European Union. If your company uses a customer service chatbot, a credit scoring system, or an AI tool to screen job applications, you are within scope.
The regulation is directly applicable. No national transposition required. Obligations are identical in Spain, Germany or France. Each member state must designate a national supervisory authority. In Spain, that authority is AESIA (Agencia Española de Supervisión de la Inteligencia Artificial), created by Royal Decree in August 2023 and already operational.
The 4-phase timeline
The AI Act does not land all at once. It rolls out across four staggered phases over three years, from February 2025 to August 2027. Each phase adds new obligations.
Here is the overview:
2 February 2025
AI literacy obligation (Art. 4) and prohibition of unacceptable-risk practices (Art. 5). Already in force.
2 August 2025
Rules for general-purpose AI (GPAI) models become applicable, along with the governance structures and the penalty regime — including enforcement of the Article 5 prohibitions.
2 August 2026
Full obligations for high-risk AI systems: conformity assessment, CE marking, EU database registration, human oversight.
2 August 2027
Remaining obligations, including AI embedded in Annex I regulated products (machinery, medical devices, toys, etc.).
Let's go through each phase in detail.
Phase 1 (2 February 2025): what you must already comply with
This phase has already passed. If you haven't addressed it, you are already out of compliance.
Article 4: AI Literacy. All organisations operating AI systems must ensure their staff have a sufficient level of AI literacy. This is not optional. Article 4 requires providers and deployers of AI systems to take measures ensuring that the people working with these systems understand how they work, their limitations, and their risks.
In practice, this means training. Documented training, tailored to each person's role. Sending a PDF does not count. You need to be able to demonstrate that your teams have received appropriate instruction. We built a dedicated EU AI Act compliance course specifically to cover this obligation.
Article 5: Prohibited practices. The practices that Article 5 classifies as unacceptable risk have been banned outright since 2 February 2025. The penalty regime enforcing that ban only became applicable on 2 August 2025 (Phase 2), but the prohibition itself already applies, so organisations must evaluate immediately whether any of their systems could fall into that category.
If you haven't addressed this phase yet, read our article on mandatory AI training under Article 4 and act now.
Phase 2 (2 August 2025): GPAI rules and enforcement of prohibitions
From 2 August 2025, the ban on unacceptable-risk AI systems is backed by the full penalty regime. The prohibitions themselves have applied since 2 February 2025 (Phase 1); August 2025 is when national authorities can enforce them with fines. No meaningful exceptions for the private sector.
What is prohibited exactly?
- Subliminal or deceptive manipulation: AI systems designed to manipulate people's behaviour in ways that cause or are likely to cause physical or psychological harm.
- Exploitation of vulnerabilities: AI that exploits age, disability or socioeconomic situation to distort behaviour.
- Social scoring by public authorities: systems that classify citizens based on social behaviour (China-style social credit).
- Real-time remote biometric identification in public spaces for law enforcement purposes (with very limited exceptions).
- Predictive policing based solely on personality traits or profiling.
- Bulk scraping of facial images from the internet or CCTV to create facial recognition databases.
- Emotion inference in workplaces or educational institutions (except for medical or safety reasons).
- Biometric categorisation based on race, political opinions, sexual orientation or other sensitive attributes.
Also in August 2025, rules for General-Purpose AI (GPAI) models — such as GPT, Gemini, Claude or Llama — come into force. Providers of these models must comply with transparency, technical documentation and copyright obligations. If a GPAI model poses systemic risk (the regulation presumes this when training compute exceeds 10^25 FLOPs), obligations become significantly stricter.
For most companies that are users of these models rather than developers, Phase 2 means reviewing that none of your AI applications fall into the prohibited list. And if you work with GPAI model providers, verifying that they are meeting their own obligations.
Phase 3 (2 August 2026): the main deadline
This is where the regulation hits businesses hardest. And where most companies are not ready.
On 2 August 2026, the full obligations for high-risk AI systems come into force. This includes:
- Risk management system (Art. 9): a continuous process of identifying, analysing and mitigating risks throughout the AI system's entire lifecycle.
- Data governance (Art. 10): ensuring that training, validation and test data are relevant, representative, complete and as free from errors as possible.
- Technical documentation (Art. 11): detailed documentation demonstrating the system's conformity with the regulation's requirements.
- Automatic event logging (Art. 12): systems must automatically log relevant events during operation, with complete traceability.
- Transparency and user information (Art. 13): clear instructions for operators, including known limitations and risks.
- Human oversight (Art. 14): high-risk systems must be designed so that humans can effectively oversee them.
- Accuracy, robustness and cybersecurity (Art. 15): adequate levels of accuracy, resistance to errors, and protection against attacks.
- Conformity assessment (Art. 43): before placing a high-risk system into service, a conformity assessment must be carried out (in many cases, a self-assessment).
- CE marking (Art. 48): conformant systems must bear the CE marking.
- EU database registration (Arts. 49 and 71): providers of high-risk systems, and certain public-sector deployers, must register them in the public EU database before placing them on the market or putting them into service.
- Fundamental Rights Impact Assessment (FRIA) (Art. 27): mandatory for certain deployers of high-risk systems, particularly public bodies and entities providing public services.
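The logging obligation (Art. 12) is the most concrete of these for engineering teams. As a minimal sketch — field names and file format are our own suggestion, not prescribed by the regulation — an append-only audit trail for an automated decision system could look like this:

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

# Hypothetical Art. 12-style event record: every automated decision is
# logged with a timestamp, an input reference, the model version and the
# outcome, so the decision trail can be reconstructed later.
@dataclass
class DecisionEvent:
    event_id: str
    timestamp: float
    system_name: str
    model_version: str
    input_ref: str       # reference to the input record, not the raw data
    outcome: str
    human_reviewed: bool

def log_decision(log_path: str, event: DecisionEvent) -> None:
    """Append one event as a JSON line (append-only audit trail)."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

event = DecisionEvent(
    event_id=str(uuid.uuid4()),
    timestamp=time.time(),
    system_name="cv-screening",
    model_version="2.3.1",
    input_ref="application-8841",
    outcome="shortlisted",
    human_reviewed=True,
)
log_decision("decisions.jsonl", event)
```

In practice you would ship these records to tamper-evident storage with a retention policy, not a local file; the point is that each decision is traceable to a system version and a human oversight flag.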
August 2026 is less than five months away. If your company operates AI systems that could be classified as high-risk and you haven't started working on compliance, time is very tight. Our EU AI Act training programme is designed precisely to help teams understand these obligations and put an action plan in place before August.
Phase 4 (2 August 2027): remaining obligations
The fourth and final phase closes out the AI Act's rollout. On 2 August 2027, obligations come into force for high-risk AI systems embedded in products regulated by EU harmonisation legislation listed in Annex I of the regulation. These include:
- Machinery (Regulation 2023/1230)
- Toys (Directive 2009/48/EC)
- Recreational craft (Directive 2013/53/EU)
- Lifts (Directive 2014/33/EU)
- Pressure equipment (Directive 2014/68/EU)
- Medical devices (Regulations 2017/745 and 2017/746)
- Civil aviation (Regulation 2018/1139)
- Motor vehicles (Regulation 2019/2144)
If your company manufactures or integrates AI into any of these product types, this is your deadline. For everyone else, August 2026 is the relevant date.
How to classify your AI systems by risk
The AI Act sets four risk levels. Each system's classification determines which obligations apply.
Unacceptable risk (prohibited)
These are the systems listed in Article 5 — covered in Phase 2 above. Subliminal manipulation, social scoring, mass biometric surveillance. These systems cannot operate in the EU under any circumstances (with very narrow national security exceptions).
High risk
This is where most enterprise AI applications that make decisions about people fall. Annex III of the regulation details the categories. The most relevant for companies are:
- HR and recruitment: systems that screen CVs, evaluate candidates, assign tasks, or make decisions about promotions, dismissals or performance reviews.
- Access to essential services: credit scoring, insurance risk assessment, prioritisation of emergency services.
- Education and training: systems that determine access to educational institutions or assess students.
- Biometrics: remote biometric identification systems (those not prohibited).
- Critical infrastructure: AI used to manage water, gas, electricity or transport networks.
- Administration of justice and democratic processes: systems assisting in legal interpretation or electoral management.
- Migration, asylum and border control: risk assessment, document verification, processing of applications.
If your company uses AI to decide who to hire, who to approve for a loan, or how to evaluate employee performance, you are very likely operating a high-risk system.
Limited risk
Systems with transparency obligations. The classic example: chatbots. If a user interacts with an AI system, they must be informed they are talking to a machine. The same applies to deepfakes and AI-generated content: it must be labelled as such.
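For a chatbot, meeting this transparency duty (Art. 50) can be as simple as making the disclosure the first message of every conversation. A minimal sketch — the wording and function names are our own, not prescribed by the Act:

```python
# Illustrative only: the user is told up front that they are talking to
# a machine, before any AI-generated reply.
AI_DISCLOSURE = "You are chatting with an automated assistant, not a human."

def open_conversation(user_name: str) -> list[str]:
    # The disclosure always precedes the first AI-generated message.
    return [AI_DISCLOSURE, f"Hello {user_name}, how can I help you today?"]

messages = open_conversation("Ana")
print(messages[0])
```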
Minimal risk
Spam filters, AI in video games, content recommendation systems. These systems have no specific obligations under the AI Act, though following voluntary codes of conduct is recommended.
The key category is high risk. If you are unsure where your systems sit, you need a formal assessment. Our AI governance and risk management course walks you through that classification process step by step.
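A first-pass triage of your inventory can be automated before the formal legal assessment. The sketch below maps a system's declared purpose to the four tiers; the category names and the mapping are simplified assumptions for illustration — Article 5 and Annex III govern the real classification, and this is not legal advice:

```python
# Simplified purpose-to-tier mapping (illustrative, not exhaustive).
PROHIBITED = {"social_scoring", "subliminal_manipulation", "emotion_inference_workplace"}
HIGH_RISK = {"cv_screening", "credit_scoring", "exam_grading", "critical_infrastructure"}
LIMITED_RISK = {"customer_chatbot", "ai_generated_content"}

def classify(purpose: str) -> str:
    """Return the AI Act risk tier for a declared system purpose."""
    if purpose in PROHIBITED:
        return "unacceptable"
    if purpose in HIGH_RISK:
        return "high"
    if purpose in LIMITED_RISK:
        return "limited"
    return "minimal"

print(classify("cv_screening"))  # → high
```

Anything the triage flags as "unacceptable" or "high" should then go to a proper legal assessment against the regulation's text.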
Penalties: up to €35 million or 7% of global annual turnover
The AI Act has teeth. Penalties are designed to make non-compliance an unviable option.
The enforcement framework has three tiers:
Serious infringements (prohibited practices): up to €35 million or 7% of global annual turnover
Whichever is higher. Applies if you operate an AI system prohibited under Article 5.
Non-compliance with core obligations: up to €15 million or 3% of global annual turnover
Applies for non-compliance with high-risk system obligations (conformity assessment, documentation, human oversight, etc.).
Incorrect information to authorities: up to €7.5 million or 1% of global annual turnover
Applies if you provide false or misleading information to supervisory bodies.
For SMEs and startups, the regulation provides for proportionate fines. But "proportionate" does not mean painless. A 3% turnover fine can be enough to close a mid-sized company.
The AI Act also allows individuals and organisations to file complaints with the national authority. A dismissed job candidate, an employee affected by an AI performance tool, or a consumer association can trigger an investigation against your company.
Action plan: what to do now
If you have read this far, you have a clear picture of what is coming. The question is: what do you do with this information?
After working with dozens of companies on AI Act preparation, this is the action plan we recommend:
Full inventory of AI systems
Map every AI system, tool and application used across your organisation. Include systems your teams are using without formal approval (Shadow AI). You cannot manage what you do not know.
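The inventory is easiest to keep honest if each system is a structured record rather than a row in a slide deck. The fields below are our suggestion for a minimal register, not a format prescribed by the regulation:

```python
from dataclasses import dataclass

# Hypothetical minimal inventory record for one AI system.
@dataclass
class AISystemRecord:
    name: str
    vendor: str
    purpose: str
    business_owner: str
    processes_personal_data: bool
    formally_approved: bool        # False → shadow AI found during discovery
    risk_tier: str = "unclassified"

inventory = [
    AISystemRecord("HelpBot", "Acme AI", "customer support chat", "Support", True, True),
    AISystemRecord("ScreenFast", "HRTech", "CV screening", "HR", True, False),
]

# Surface the shadow AI immediately — these need approval or removal first.
shadow_ai = [s.name for s in inventory if not s.formally_approved]
print(shadow_ai)  # → ['ScreenFast']
```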
Risk classification for each system
For each system in your inventory, determine its risk level under the AI Act criteria: unacceptable, high, limited or minimal. When in doubt, apply the precautionary principle and classify upwards.
Train your team (if you haven't already)
Article 4 is already in force. Your staff must have adequate AI literacy. Document it. Our EU AI Act compliance and AI governance courses are built for this purpose.
Remove or modify prohibited systems
If any of your systems match the prohibited practices list in Article 5, stop using them. Now. They have been illegal since February 2025, and enforceable with fines since August 2025.
For high-risk systems: start the conformity process
Implement the risk management system, data governance, technical documentation, automatic logging, human oversight and cybersecurity measures required by Articles 9 to 15. Prepare the conformity assessment and CE marking.
Register your systems in the EU database
Providers of high-risk systems, and certain public-sector deployers, must register them in the public EU database before August 2026.
Build continuous monitoring into your processes
The AI Act is not a one-off exercise. It requires post-market surveillance, incident management and continuous documentation updates. Integrate this into your existing management systems (ISO 27001, ENS, etc.).
August 2026 looks far away. It is not. Implementing an AI risk management system, documenting every system, training your team, running conformity assessments and registering in the EU database takes months of work. If your company also needs to comply with NIS2 or already holds an ISO 27001 certification, you can integrate the AI Act framework into your existing management system and save significant effort. Companies starting now are cutting it close. Companies that haven't started are already late.
If you need a solid starting point, our training programmes cover both the regulatory framework and practical implementation. And if you would prefer we assess your situation directly, we can run an initial audit to map exactly where you stand and what you still need to do.
The AI Act is not a threat. Managed well, it turns compliance into a competitive advantage. But that requires action. And that action needs to start now.
EU AI Act Training
Is your team ready for August 2026?
Our EU AI Act compliance programme covers the full obligation timeline, risk classification, and practical implementation of high-risk system requirements. Certified training that fulfils Article 4.
View EU AI Act Course →