EU AI Act · SMEs · 24 March 2026 · 11 min read

EU AI Act for SMEs: practical step-by-step compliance guide

Over 90% of European SMEs use some form of artificial intelligence: chatbots, automated marketing tools, CV filters, recommendation engines, code assistants. If your company is one of them, the EU AI Act applies to you. This article is your practical guide: what to do, in what order and with what resources.

Carlos Salgado · CEO & Co-founder · Delbion

When people talk about the EU AI Act, most articles focus on large corporations, high-risk systems and AI model providers. But the reality is that most obligations also apply to SMEs. Especially Article 4, which requires AI literacy for all staff and has been in force since February 2025.

This article is for you if you run a company with 5 to 250 employees, you use AI tools (even if it is just ChatGPT or a CRM with AI features), and you need to know what you have to do to comply without spending a fortune.

Does the EU AI Act affect my SME?

Yes. With nuances, but yes.

The EU AI Act applies to three profiles:

  • Providers of AI systems (those who develop them or place them on the market).
  • Deployers: those who use AI systems in their professional activity. This includes any company that uses tools with AI.
  • Importers and distributors of AI systems.

If your company uses a chatbot for customer service, a lead scoring system, an AI content generation tool, an automated CV filter or any tool that uses language models, computer vision or machine learning, you are a deployer and you have obligations.

Good news for SMEs: The European Commission has committed to specific support measures: simplified technical documentation, dedicated consultation channels, regulatory sandboxes and adapted training. But that does not exempt you from compliance. It only makes the process easier.

What counts as an AI system

The EU AI Act defines "AI system" broadly: a machine-based system that operates with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

In practice, this includes:

| Tool | Is it an AI system? | Example use in an SME |
|---|---|---|
| ChatGPT / Claude / Gemini | Yes | Generate emails, summarise documents, customer service |
| CRM with lead scoring (HubSpot, Salesforce) | Yes | Classify leads by close probability |
| Automated CV filter | Yes (high risk) | Automatically shortlist candidates |
| AI marketing tools (Jasper, Copy.ai) | Yes | Generate copy, segment audiences |
| Excel with formulas | No | Manual calculations and filters |
| GitHub Copilot / code assistants | Yes | Automatically generate and review code |
| Video surveillance with facial recognition | Yes (high risk) | Biometric access control |

Classify your systems by risk level

The EU AI Act classifies AI systems into four risk levels. Your obligations depend on where the systems you use fall:

Unacceptable risk (prohibited)

Social scoring, subliminal manipulation, emotion recognition in the workplace/schools, real-time mass biometric surveillance. Your SME probably does not use any of these. If it does, it must stop immediately.

High risk

Automated CV filtering, credit scoring, AI systems in healthcare, education, justice administration, critical infrastructure management. If you use AI to make or influence decisions about people, check Annex III of the regulation. There are heavy obligations: conformity assessment, technical documentation, human oversight, registration in the EU database.

Limited risk (transparency obligations)

Chatbots, AI content generation, deepfakes. The main obligation is to inform the user that they are interacting with an AI or that the content was generated by AI. If your chatbot serves customers, they must know they are talking to a machine.

Minimal risk

Spam filters, product recommendations, code assistants for internal use. No specific obligations beyond Article 4 (literacy). Most AI use in SMEs falls here.

The key point: even if all your systems are minimal risk, Article 4 requires you to ensure that your staff have a sufficient level of AI literacy. This has been in force since February 2025.
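The four tiers above can sit next to your systems inventory as a simple lookup. Here is a minimal sketch in Python; the category names and tier assignments are illustrative assumptions, not an official taxonomy, and anything you suspect is high risk must still be checked against Annex III of the regulation.

```python
# Illustrative mapping of common SME tool categories to EU AI Act risk tiers.
# The categories and assignments below are examples, not an official list:
# always verify potential high-risk uses against Annex III.

RISK_TIERS = {
    "cv_filtering": "high",              # employment decisions (Annex III)
    "credit_scoring": "high",
    "facial_recognition_access": "high",
    "customer_chatbot": "limited",       # transparency obligations
    "content_generation": "limited",
    "spam_filter": "minimal",
    "code_assistant": "minimal",
    "lead_scoring": "minimal",
}

def classify(system_category: str) -> str:
    """Return the risk tier for a known category, or flag it for manual review."""
    return RISK_TIERS.get(system_category, "unknown - review against Annex III")
```

A lookup like this keeps the classification explicit and auditable: when an inspector asks why a tool was treated as minimal risk, the answer is written down rather than in someone's head.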

Obligations already in force

Since 2 February 2025, two obligations apply to all companies:

Article 4: AI literacy

You must ensure that all people who work with AI systems in your organisation have a sufficient level of knowledge about how these systems work, their limitations, risks and opportunities. This includes:

  • Employees who use AI tools in their daily work.
  • Executives who make decisions about AI adoption.
  • IT staff who manage or integrate AI systems.

No specific training format is required, but it must be demonstrable. That is, if an inspector asks, you need to be able to prove that you have trained your team.

FUNDAE covers 100%. AI training for companies can be subsidised through FUNDAE (Spain's national employment training foundation). For most SMEs, the real cost is EUR 0. Unused FUNDAE credits expire at the end of the year, so it is money you lose if you do not use it. Our Secure AI Application for Business course meets Article 4 requirements and is fully managed through FUNDAE.

Article 5: Prohibited practices

These practices have been prohibited since 2 February 2025 (the penalty provisions became applicable in August 2025). It is prohibited to use AI systems for:

  • Subliminal manipulation that causes harm.
  • Exploitation of vulnerabilities of specific groups (age, disability).
  • Social scoring (by public authorities or private actors).
  • Real-time mass biometric surveillance in public spaces (with security exceptions).

Most SMEs do not use these systems. But it is worth checking: if you use facial recognition for office access control, it could fall into a grey area.

What comes in August 2026

2 August 2026 is the big deadline. Obligations for high-risk systems come into force:

  • Risk management system for each high-risk AI system.
  • Data governance (quality, representativeness, absence of bias).
  • Complete technical documentation.
  • Automatic activity logging.
  • Transparency and user information.
  • Effective human oversight.
  • Accuracy, robustness and cybersecurity.
  • Conformity assessment and CE marking (for providers).
  • Registration in the EU public database.

If you are a deployer (you use high-risk systems but do not develop them), your obligations are lighter but real: human oversight, use in accordance with the provider's instructions, monitoring and incident reporting.

Maximum fines under the EU AI Act:

  • EUR 35 million or 7% of global turnover: prohibited practices (Art. 5).
  • EUR 15 million or 3% of global turnover: non-compliance with high-risk obligations.
  • EUR 7.5 million or 1% of global turnover: providing incorrect information.

For SMEs and startups, the cap is the lower of the two figures (fixed amount or percentage of turnover), which reduces the financial impact. But it does not eliminate it. A fine of 3% of annual turnover can be devastating for an SME.
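The SME cap described above is simple arithmetic: take the lower of the fixed amount and the turnover percentage. A small sketch, using the figures from this article (Art. 99 of the regulation is the authoritative source):

```python
# SME fine cap: the maximum fine is the LOWER of the fixed amount and the
# turnover percentage. Figures taken from the brackets listed above.

FINE_BRACKETS = {
    "prohibited_practice": (35_000_000, 0.07),   # Art. 5 violations
    "high_risk_obligation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_sme_fine(violation: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine for an SME: min(fixed cap, pct * turnover)."""
    fixed_cap, pct = FINE_BRACKETS[violation]
    return min(fixed_cap, pct * annual_turnover_eur)

# An SME with EUR 2M turnover: high-risk non-compliance is capped at
# 3% of turnover, i.e. EUR 60,000 - still a serious amount for a small company.
```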

7-step action plan

This is what we recommend to the SMEs we work with:

1. Inventory all the AI systems you use

Include SaaS tools, plugins, APIs and any component that uses AI. Do not forget informal usage: employees using ChatGPT on their own, for example. Document what each one is used for and who uses it.
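An inventory does not need special software: a list of structured records exported to CSV is enough for an audit. A minimal sketch, where the field names are our suggestion rather than anything the regulation prescribes:

```python
# Minimal AI systems inventory: structured records plus a CSV export for audits.
# Field names are illustrative; adapt them to your organisation.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AISystemRecord:
    name: str            # e.g. "ChatGPT", "HubSpot lead scoring"
    purpose: str         # what the tool is used for
    users: str           # team or roles that use it
    provider: str        # vendor name, or "internal"
    risk_level: str      # unacceptable / high / limited / minimal

def export_inventory(records: list[AISystemRecord], path: str) -> None:
    """Write the inventory to a CSV file with one row per system."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fld.name for fld in fields(AISystemRecord)]
        )
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)
```

The point of the structure is that every system answers the same questions (what, why, who, whose, how risky), which is exactly what steps 2 and 7 below build on.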

2. Classify each system by risk level

Use the Annex III table of the regulation as a reference. If you have doubts, the European Commission offers a free AI Act Compliance Checker for SMEs.

3. Train your team (Art. 4) - URGENT

This should already be done. If you have not done it, it is the first priority. A 10-hour training programme covers the fundamentals. FUNDAE covers 100% of the cost. There is no excuse.

4. Create an AI usage policy

Document which AI tools are approved, what they can be used for, what data can be entered and what cannot, and who is responsible for overseeing their use. It does not need to be a 50-page document. 2-3 clear pages are enough.

5. Implement transparency in interaction systems

If you use chatbots, they must inform the user that they are talking to an AI. If you generate content with AI (images, text, audio), you must label it. This is mandatory for limited-risk systems.
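In practice, the chatbot disclosure can be as small as prepending a notice to the first reply of each conversation. A sketch, where the wording and function names are our own illustration:

```python
# Transparency sketch for a customer-facing chatbot: prepend a one-time
# disclosure so users know they are talking to an AI. Wording is illustrative.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def with_disclosure(answer: str, is_first_message: bool) -> str:
    """Attach the AI disclosure to the first reply of a conversation."""
    if is_first_message:
        return f"{AI_DISCLOSURE}\n\n{answer}"
    return answer
```

The same idea applies to generated content: a visible "generated with AI" label attached at the point of publication, rather than relying on users to guess.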

6. Review your AI providers

If you use high-risk systems provided by third parties, verify that the provider meets its obligations (CE marking, technical documentation, conformity assessment). Your responsibility as a deployer includes using the system in accordance with the provider's instructions.

7. Document everything

Compliance is demonstrated through documentation. Keep records of the training delivered, the systems inventory, the risk classifications, the usage policy and any assessments you carry out. If an inspection comes, this is what they will ask for.
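A simple way to keep these records is an append-only log: one dated entry per training session, classification decision or policy update. A minimal sketch using JSON Lines; the record fields are our suggestion, not a prescribed format:

```python
# Append-only compliance log (JSON Lines): one dated record per training
# session, risk classification or policy update. Field names are illustrative.
import json
from datetime import date

def build_entry(event_type: str, detail: str) -> dict:
    """One dated compliance record, e.g. ('training', 'Art. 4 session, 10h')."""
    return {"date": date.today().isoformat(), "type": event_type, "detail": detail}

def log_compliance_event(path: str, event_type: str, detail: str) -> None:
    """Append the record as one JSON line to the audit log file."""
    with open(path, "a") as f:
        f.write(json.dumps(build_entry(event_type, detail)) + "\n")
```

Append-only logs are hard to quietly rewrite after the fact, which is exactly the property an inspector cares about.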

How much does compliance cost

It depends on the complexity of your AI usage, but for a typical SME:

| Action | Estimated cost | Notes |
|---|---|---|
| Article 4 training (10h per person) | EUR 0 | With FUNDAE credit. Without FUNDAE: EUR 75/person |
| Systems inventory and classification | EUR 0 - 500 | Can be done internally with guidance |
| AI usage policy | EUR 0 - 1,000 | Template + customisation. Cheaper than starting from scratch |
| Transparency for chatbots/content | EUR 0 - 200 | Minor interface changes |
| Full assessment (if you use high risk) | EUR 1,500 - 5,000 | External consultancy recommended |

For the vast majority of SMEs that only use minimal or limited-risk AI, the compliance cost is minimal: training (free with FUNDAE), usage policy (a single document) and transparency (a notice on the chatbot). There is no excuse not to do it.

Common mistakes SMEs make

  • "The AI Act does not affect me because I am small." False. Article 4 applies to all companies that use AI, regardless of size.
  • "We already use AI safely, we do not need training." The obligation is not to use AI safely. It is to demonstrate that your staff have a sufficient level of literacy. Without documented training, you cannot prove it.
  • "My employees use ChatGPT on their own, that does not count." If an employee uses AI in a work context, the company is responsible. Shadow AI is a real regulatory risk.
  • "I can wait until August 2026." Article 4 is already in force. And preparing for high-risk obligations (if applicable) takes months. It is not something you do in two weeks.
  • "FUNDAE is complicated and not worth the hassle." We handle all the FUNDAE paperwork. The process for the company requires zero effort. And the saving is 100%.

EU AI Act training for SMEs

Comply with Article 4 for EUR 0

Our Secure AI Application for Business course covers Article 4 requirements: how AI works, risks, limitations, responsible use and the legal framework. 10 hours, 100% subsidised by FUNDAE. We handle all the paperwork.

View Secure AI Application for Business Course →

Next step

Get your SME ready for the EU AI Act

Secure AI training 100% subsidised by FUNDAE, usage policies, systems inventory and risk assessment. Everything you need to comply with the European artificial intelligence regulation.