AESIA gets all the headlines. Every time AI regulation in Spain comes up, it is the Spanish AI Supervisory Agency that appears. That makes sense: it is the EU AI Act's designated body, the one that has generated the most press, the one with the most visible sanctioning power.
But there are two other bodies that can knock on your door if you use AI in your company. And many organisations do not realise this until an inspection is already underway.
This is not regulatory theory. All three have active jurisdiction, real enforcement capacity, and have already acted against companies in cases involving automated technology. The difference is that AI is now embedded in nearly every business process, which significantly widens the exposure window.
AESIA: the AI systems regulator
The Spanish Artificial Intelligence Supervisory Agency is the national competent authority Spain has designated for applying the EU AI Act. It became operational in 2023, but its full sanctioning power did not kick in until August 2025.
With around 30 professionals on staff, it might look small. But it has the capacity to open investigations, request documentation, impose precautionary measures and apply penalties. And that capacity is growing: the agency's budget is expanding and so is its team.
What AESIA monitors is EU AI Act compliance: whether the AI systems your company uses are correctly classified, whether high-risk ones meet documentation and transparency requirements, and whether there is adequate governance over how AI is used across the organisation.
One point few people know: AESIA also supervises compliance with Article 4, which requires that all staff working with AI systems have a sufficient level of AI literacy. This is not only for companies that build AI. It applies to any company that uses it.
EUR 35M
Maximum EU AI Act penalty for the most serious violations: EUR 35 million or 7% of global annual turnover, whichever is greater.
For Article 4 violations (AI literacy), penalties reach up to EUR 7.5 million or 1% of global turnover. Not a symbolic amount for a company with EUR 30 or 50 million in annual revenue.
AEPD: steps in when your AI processes personal data
The Spanish Data Protection Authority had jurisdiction over companies long before AESIA existed. And it still does. The point is that most AI systems companies now deploy process personal data, which automatically brings the AEPD into play.
The typical cases are not exotic. A chatbot interacting with customers handles personal data. A scoring system evaluating applications processes data about real people. An HR tool with AI that filters candidates or assesses employee performance works with particularly sensitive data. Smart CCTV in offices or shops captures biometric data.
In all of these scenarios, the GDPR applies on top of the EU AI Act. And the AEPD has the authority to investigate and sanction any data processing that does not meet the required safeguards.
What makes this especially important is the regulatory overlap: if your AI system is high-risk under the EU AI Act and processes personal data, you can receive an inspection from two regulators at the same time. AESIA looks at AI Act compliance. AEPD looks at data processing. They are not mutually exclusive.
GDPR fines: up to EUR 20 million
Serious GDPR infringements can result in fines of up to EUR 20 million or 4% of global annual turnover, whichever is greater. The AEPD has already sanctioned companies for using automated decision-making systems without adequate safeguards.
The GDPR has specific requirements for automated decision-making that affects individuals: the right to an explanation, the right to contest a decision, and in many cases, a mandatory impact assessment. If your AI makes or influences decisions about people (customers, employees, candidates) without these safeguards, the AEPD can act.
ITSS: the Labour Inspectorate and AI in HR
This is the regulator companies least expect. The Labour and Social Security Inspectorate (ITSS) has jurisdiction over any use of AI that affects labour relations. And that scope is broader than most people assume.
Specific cases include: recruitment algorithms that filter job applications, automated performance evaluation systems, AI-powered employee monitoring tools, or any process where an algorithmic decision affects someone's employment conditions.
The legal basis is Article 64.4(d) of the Workers' Statute, introduced by Royal Decree-Law 9/2021 (the so-called "Rider Law"). It establishes that workers' representatives have the right to know the parameters, rules and instructions underpinning algorithms or AI systems that may affect their working conditions. This is not a theoretical right: workers' representatives can formally demand this information.
Article 64.4(d) of the Spanish Workers' Statute
"[The works council has the right] to be informed by the company of the parameters, rules and instructions on which the algorithms or artificial intelligence systems are based that affect decision-making which may have an impact on working conditions, access to and maintenance of employment, including profiling."
Royal Decree-Law 9/2021, amending Article 64.4(d) ET.
The ITSS has already acted against gig economy platform companies for the opacity of the algorithms managing delivery workers. Those precedents matter: the case law and administrative doctrine are gradually extending to more conventional sectors.
If your company uses AI to make or support decisions about hiring, evaluating, promoting or dismissing people, the ITSS has the authority to inspect whether you are meeting algorithmic transparency obligations and applicable labour safeguards.
The triple regulator scenario
A concrete example makes this clearer.
An insurance company uses an AI system to evaluate health insurance applications. The system classifies applicants, applies risk criteria and recommends premiums or rejections. Members of staff who previously handled that process manually now work with the tool.
First front: AESIA. The system is likely high-risk under Annex III of the EU AI Act, which covers AI systems used for risk assessment and pricing in relation to natural persons in life and health insurance. It needs technical documentation, a conformity record, human oversight mechanisms and, critically, Article 4 AI literacy training for all staff who use it.
Second front: AEPD. The system processes personal data about applicants, potentially including health data, which is a special category under the GDPR. It requires a Data Protection Impact Assessment (DPIA), a robust legal basis, non-discrimination safeguards and mechanisms allowing affected individuals to challenge automated decisions.
Third front: ITSS. Employees who previously handled the process have the right to know the parameters of the algorithm that now influences their work. If the system affects how their performance is assessed or entails a restructuring of their roles, workers' representatives can demand information and the ITSS can verify compliance.
Three possible simultaneous inspections, for the same system. This is not a hypothetical scenario. It is the current legal framework applied to a completely ordinary case in the Spanish insurance sector.
Article 4 as a shield
Of all the requirements these three regulators can demand, there is one that cuts across all three fronts and reduces exposure to all of them at once: Article 4 of the EU AI Act.
Article 4 requires all AI system deployers to ensure their staff have a sufficient level of AI literacy. It entered into force in February 2025 and the effective compliance deadline is August 2026. It has not been postponed. It has not changed.
Before AESIA, documented and certified training demonstrates that the company takes AI governance seriously. Training records are among the first things an inspector will request.
Before the AEPD, a workforce trained in the responsible use of AI and in data protection is evidence of the technical and organisational measures the GDPR requires to demonstrate accountability.
Before the ITSS, a workforce trained on the systems it uses and their implications makes it far easier to meet the algorithmic transparency obligations of Article 64.4(d) of the Workers' Statute.
First question in any AI inspection
Has your company documented which AI systems it uses and who controls them? If the answer is no, that is the first risk to address: without that inventory, you cannot give any of the three regulators a satisfactory answer.
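As a starting point, that inventory can be a simple structured record per system, mapped to the regulators each one potentially brings into play. The sketch below is illustrative only: the field names and mapping rules are assumptions for demonstration, not criteria prescribed by any regulator.

```python
from dataclasses import dataclass

# Illustrative inventory entry. Fields and mapping rules are assumptions
# for demonstration, not regulatory requirements.
@dataclass
class AISystem:
    name: str
    owner: str                    # who controls the system internally
    purpose: str
    high_risk: bool               # preliminary EU AI Act Annex III assessment
    processes_personal_data: bool
    affects_employment: bool      # hiring, evaluation, monitoring, dismissal

def relevant_regulators(system: AISystem) -> list[str]:
    """Rough mapping from system attributes to the three regulators."""
    regulators = ["AESIA"]            # the EU AI Act applies to any deployer;
                                      # high_risk triggers its heavier obligations
    if system.processes_personal_data:
        regulators.append("AEPD")     # GDPR applies on top of the AI Act
    if system.affects_employment:
        regulators.append("ITSS")     # algorithmic transparency duties in HR
    return regulators

inventory = [
    AISystem("Underwriting scorer", "Risk dept.",
             "Evaluate health insurance applications",
             high_risk=True, processes_personal_data=True,
             affects_employment=True),
    AISystem("Marketing copy assistant", "Marketing",
             "Draft campaign texts",
             high_risk=False, processes_personal_data=False,
             affects_employment=False),
]

for system in inventory:
    print(f"{system.name} ({system.owner}) -> {relevant_regulators(system)}")
```

Even a spreadsheet with these columns achieves the same goal; what matters is that someone owns each system and the exposure per regulator is written down.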
FUNDAE-subsidised training covers 100% of the cost for Spanish companies with accumulated training credit. And training itself is not optional: it is the most concrete, most visible and most defensible step a company can take before all three regulators.
If you are not sure where to start, the first step is always basic team training. Get in touch and we will help you structure what each role profile needs.
EU AI Act Training · Article 4
Reduce your exposure before all three regulators
Our Secure AI Application for Business course covers Article 4 requirements and is designed so your team understands the real regulatory risks. Available with 100% FUNDAE subsidy. Certificates per participant included.
View Training Programme →