Compliance & AI 23 March 2026 8 min read

Mandatory AI Training: What Article 4 of the EU AI Act Requires From Your Company

The EU AI Act requires every organisation using AI to ensure staff AI literacy. Here is what Article 4 says, who it affects, the deadlines, the penalties, and exactly what you need to do before enforcement kicks in on 2 August 2026.

Carlos Salgado, CEO & Co-founder · Delbion

Let me get straight to the point: if your company uses any AI system (and it almost certainly does, even if you are not aware of it), you have had a legal obligation since 2 February 2025. Not upcoming. Active. Right now.

It is called Article 4 of the EU Artificial Intelligence Act (Regulation 2024/1689), and it establishes something that has caught most companies completely off guard: the obligation to ensure that all staff working with AI have a sufficient level of AI literacy.

It is not a recommendation. It is not a best practice. It is a legal obligation with multi-million euro penalties. And yet, the majority of companies in Europe do not even know it exists.

Over the past few weeks, I have received several calls from clients asking exactly the same thing: "Carlos, we have been told we need to train our team on AI by law. Is that true?"

Yes. It is true. And in this article I am going to explain exactly what the regulation says, who it affects, what the consequences of non-compliance are and, most importantly, what you can do today to get up to speed.

Feb 2025: Article 4 in force
Aug 2026: Enforcement begins
€35M: Maximum fine under the AI Act

What Article 4 actually says

Article 4 of the EU AI Act is titled "AI Literacy" and is surprisingly short. In essence, it states:

"Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf."

Let us break this down, because every word matters:

"Providers and deployers": this does not only affect companies that build AI. It affects any company that uses it. If your sales team uses a CRM with predictive AI features, your company is a "deployer".

"Shall take measures": this is a mandate, not a suggestion. The regulation requires you to take concrete, documentable actions.

"Staff and other persons": this includes employees, external collaborators, subcontractors and anyone who interacts with AI systems within your organisation.

"Sufficient level of AI literacy": here lies the key. It is not about turning everyone into a machine learning engineer. It is about ensuring each person understands the fundamentals, the risks and the limitations of the AI they use in their daily work.

The regulation adds that this level must account for "their technical knowledge, experience, education and training, as well as the context in which the AI systems are to be used." In other words: training must be proportional to the role and the risk.

Who it affects: not just tech

This is where most companies get it wrong. When they hear "AI regulation" they think of Silicon Valley startups or big tech. The reality is very different.

Article 4 affects any organisation that uses AI systems, regardless of size or sector. And in 2026, that includes virtually every company:

Law firms using AI-powered document review tools. Hospitals and clinics with diagnostic support systems. Logistics companies with automated route optimisation. HR departments filtering CVs with algorithms. Marketing teams generating content with generative AI tools. Finance departments with prediction or scoring models.

If anyone in your company uses ChatGPT, Copilot, Gemini or any other AI tool (even the free version), Article 4 applies to you.

⚠️

In our audits, the average number of undocumented AI tools we find per company is between 5 and 12. Most installed by employees autonomously, without IT or management knowledge. Each one of those tools generates an AI literacy obligation.

What AI literacy means under the Act

The EU AI Act itself defines the term in Article 3, paragraph 56:

"Skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems and to gain awareness of the opportunities and risks of AI and possible harms it can cause."

In practical terms, your staff must be able to:

1. Understand what AI is and is not: distinguish between simple automation and real AI systems, and grasp the basics of how they work.

2. Identify risks: know that AI can generate incorrect outputs (hallucinations), that it can have biases, that data can leak, and that there are legal implications in its use.

3. Know the regulatory framework: have a basic understanding of the AI Act, GDPR as applied to AI, and the company's internal policies.

4. Use AI responsibly: apply security best practices, avoid entering sensitive data into unauthorised tools, and know when to escalate a decision involving AI.

5. Supervise results: never accept an AI output without human review, especially if it affects people or critical business decisions.

We are not talking about a master's degree in data science. We are talking about a basic but solid level of understanding that enables each person on your team to use AI with sound judgment. That said: it must be documented, verifiable and adapted to each worker's profile.

Deadlines and penalties

This is where many companies relax, thinking "there is still time." Wrong.

Article 4 has been in force since 2 February 2025. It is not a future obligation: it is a present one. If you do not have an AI literacy plan for your staff today, you are already non-compliant.

For perspective, the AI Act timeline has several phases:

1. February 2025: Prohibited practices + AI literacy (Art. 4)

Already in force. AI prohibitions (social scoring, subliminal manipulation, etc.) and the literacy obligation apply from this date.

2. August 2025: Obligations for general-purpose AI models

Rules for foundation model providers (OpenAI, Google, Meta, etc.) come into effect.

3. August 2026: High-risk obligations + enforcement of Article 4

Companies using high-risk AI (HR, credit scoring, healthcare, education, justice) must comply with the full catalogue: impact assessments, technical documentation, human oversight, transparency and more. National authorities begin enforcing AI literacy requirements.

In terms of penalties, the AI Act establishes three tiers of fines:

Up to 35 million euros or 7% of global annual turnover (whichever is greater): for prohibited AI practices.

Up to 15 million euros or 3% of global turnover: for non-compliance with the Regulation's main obligations, including high-risk requirements.

Up to 7.5 million euros or 1% of global turnover: for supplying incorrect, incomplete or misleading information to authorities.
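The "whichever is greater" rule means the applicable cap scales with company size. A minimal sketch of that calculation, with illustrative turnover figures (this is arithmetic only, not legal advice):

```python
# Sketch of the AI Act fine caps: each tier is the GREATER of a fixed
# amount and a percentage of global annual turnover.
def fine_cap(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Return the maximum possible fine for one penalty tier."""
    return max(fixed_eur, turnover_eur * pct)

# Example: a company with €2bn global annual turnover.
turnover = 2_000_000_000
print(fine_cap(turnover, 35_000_000, 0.07))  # prohibited practices tier → 140000000.0
print(fine_cap(turnover, 15_000_000, 0.03))  # main obligations tier → 60000000.0
```

For a small company the fixed amount dominates; for a large group the percentage does, which is why the same infringement can cost orders of magnitude more for a multinational.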

📋

Civil liability is already active. Since August 2025, if an untrained employee causes harm using an AI system (leaks client data, makes discriminatory decisions based on an algorithm), your organisation is liable. The absence of a documented literacy programme makes any defence extremely difficult.

Download the free checklist

Assess whether your company complies with Article 4 of the EU AI Act

How to comply: action plan

After helping dozens of companies prepare for the AI Act, we have developed a practical five-step approach that works for organisations of any size.

1. AI inventory and exposure profiles

Before training anyone, you need to know what AI tools are used in your organisation and who uses them. Do a complete inventory: official tools, plugins, browser extensions, APIs integrated into your systems. Then classify your staff by their level of AI interaction: basic users, advanced users, deployment managers and executive team. Each group needs a different level of training.
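As a sketch of what such an inventory could look like in practice (the field names, categories and example tools here are my own illustration, not anything prescribed by the Act):

```python
from dataclasses import dataclass

# Hypothetical record for one AI tool found during the inventory.
@dataclass
class AITool:
    name: str         # e.g. a chatbot, a CRM scoring plugin, a browser extension
    category: str     # "official", "plugin", "extension" or "api"
    owner_dept: str   # department responsible for the tool
    sanctioned: bool  # approved by IT/management, or adopted autonomously?

# Staff profiles by level of AI interaction, each implying a training tier.
PROFILES = ["basic_user", "advanced_user", "deployment_manager", "executive"]

def unsanctioned(tools: list[AITool]) -> list[str]:
    """Names of tools adopted without IT or management knowledge."""
    return [t.name for t in tools if not t.sanctioned]

inventory = [
    AITool("ChatGPT (free)", "official", "marketing", False),
    AITool("CV screening plugin", "plugin", "hr", True),
]
print(unsanctioned(inventory))  # → ['ChatGPT (free)']
```

The point of the `sanctioned` flag is exactly the shadow-AI problem described above: every unsanctioned entry that surfaces in the inventory is an undocumented literacy obligation.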

2. Baseline training for the entire organisation

All staff need a minimum level of literacy: what AI is, what its risks are, what the regulations say, what the internal usage policies are. This training does not need to be long (4-6 hours is usually enough for the base level), but it must be practical, use real cases and be adapted to your sector. A generic "intro to AI" course does not meet Article 4 requirements if it fails to address risks and the regulatory framework.

3. Specialised training by profile

AI deployment managers, technical teams and the executive team need additional, specific training. Technical staff must understand risk assessments and technical documentation. Leadership must understand their legal responsibilities and how to oversee AI use across the organisation. Departments like HR, legal or customer service that use AI intensively need training specific to the risks in their area.

4. Documentation and evidence

The AI Act is an accountability-based regulation. Training your team is not enough: you must be able to prove it. Record attendance, content delivered, assessments completed and dates. Keep evidence for at least as long as you use the AI systems (and preferably longer). If an authority asks you to demonstrate Article 4 compliance, you need a solid dossier.
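A training record can be as simple as one structured entry per person per session. This sketch (the field names are illustrative assumptions, not a regulatory schema) shows the kind of evidence a dossier would collect:

```python
import json
from datetime import date

# Hypothetical training-evidence record: who attended, what was covered,
# the assessment result and the date — persisted as one JSON line per session.
def training_record(employee, course, topics, score, session_date):
    return {
        "employee": employee,
        "course": course,
        "topics_covered": topics,
        "assessment_score": score,
        "date": session_date.isoformat(),
    }

record = training_record(
    "A. Employee",
    "AI literacy baseline",
    ["what AI is", "risks", "AI Act basics", "internal policy"],
    score=0.85,
    session_date=date(2026, 3, 23),
)
print(json.dumps(record))  # append to an evidence log, e.g. a JSON Lines file
```

An append-only log like this gives you exactly what an authority would ask for: attendance, content, assessment and dates, all timestamped.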

5. Continuous updates

AI evolves every week. A literacy programme cannot be a one-off course that is delivered once and filed away. You need a periodic update plan: at minimum, a biannual content review and an annual refresher session for all staff. When you adopt new AI tools, training must be updated to cover them.

The important thing is to start. If you have nothing today, the worst thing you can do is wait for the perfect plan. A basic documented programme is infinitely better than having nothing at all.

At Delbion, we designed our Secure AI Application for Business course precisely to address this need. It is a practical programme covering essential AI concepts, the AI Act regulatory framework, cybersecurity risks and usage best practices, all adaptable to the sector and profile of each organisation. It meets Article 4 requirements and generates the documentation and evidence needed to demonstrate compliance.

I am not going to tell you it is the only option. There are consultancies, e-learning platforms and university programmes that can work. What I will tell you is that whichever route you choose, make sure the programme:

  • Specifically covers the risks and regulatory framework (not just "how to use ChatGPT")
  • Is adapted to your organisation's profiles
  • Generates verifiable documentary evidence
  • Includes a periodic update component

The Article 4 obligation is not going away. And with high-risk obligations arriving in August 2026, companies that have not covered the literacy baseline will face a much bigger problem when they need to document human oversight, impact assessments and quality management systems.

AI training is not a cost: it is the first step to complying with the AI Act without surprises. The best time to start was a year ago. The second best time is today.

Certified Training

Secure AI Application for Business: comply with Article 4

Our AI literacy programme covers Article 4 requirements of the EU AI Act: essential concepts, risks, regulatory framework and best practices. Adapted to your sector, with assessment and documentation included.

View full programme →

Next step

Train your team before enforcement begins

Explore our AI training programmes: from the basic literacy required by Article 4 to advanced training for technical and executive teams. Adapted to your sector, with accreditable documentation.