Cybersecurity · 6 April 2026 · 8 min read

If deepfakes can already impersonate Spain's Guardia Civil, your business is the next target

A fake AI-generated profile deceived citizens by impersonating Spain's national police force. The same mechanism is used against businesses. Here is what you need to know and how to protect your team before August 2026.

Carlos Salgado CEO & Co-founder · Delbion

Fake profiles on social media. AI-generated photographs. Videos featuring Guardia Civil uniforms, an institutional tone, and warning messages that look entirely official. Behind them, fraudsters extracting personal data from citizens who trust what they see.

This is not science fiction. It happened in Spain, it is documented, and the Guardia Civil itself had to issue public warnings urging citizens to verify the authenticity of profiles before engaging with them.

The most unsettling detail is not that it happened. It is what the Guardia Civil's own warning acknowledged: today's deepfakes are "practically imperceptible to the untrained eye". Two years ago there were visible flaws. Today there are not.

If a state institution can be convincingly impersonated, your company faces exactly the same risk. And most organisations do not realise until it is too late.

Spain's Guardia Civil impersonated by deepfake

The mechanism is precise. Attackers create social media profiles using AI-generated photographs: correct uniforms, institutional settings, faces that belong to no real person yet look entirely convincing. The profiles replicate the formal tone and communication style of verified official accounts.

Once live, the profiles offer "security advice", publish threat alerts and send direct messages requesting personal data under the pretext of fictitious investigations. The goal is always the same: data, money, access.

What has changed is the production scale and output quality. An attacker can create a convincing profile in hours with no specialist expertise and no production team. Just the AI tools available to anyone.

$25M

Amount transferred by the CFO of a Hong Kong company after a real-time deepfake video call with the "CEO" and several company executives. Documented in 2024, it is the first major public case of CEO fraud using live deepfake technology.

The same attack, against your business

The Guardia Civil deepfake targeted individual citizens. But the technology is identical to what is used against companies, and the payoff is far larger.

CEO fraud via voice deepfake. An employee receives a phone call from the CEO requesting an urgent, confidential transfer. The voice is identical. The tone is right. The context is plausible. The call lasts 90 seconds. The money leaves the account before anyone can verify anything. This type of attack is already documented in Spain and across Europe.

Real-time video deepfakes. The Hong Kong case in 2024 changed risk perception: a multinational's CFO attended a video call with several colleagues including the CEO. All of them were real-time deepfakes. Result: $25 million transferred. The fraud was not detected until after the fact.

AI-generated phishing. Phishing emails used to contain grammatical errors, obvious machine translations, generic contexts. Not anymore. Current language models generate perfectly personalised emails: they know the recipient's name, role, company and recent projects. They pass every spam filter. There is no visible warning sign.

Supplier and client impersonation. An email from the usual supplier with updated payment instructions. A message from the client with an urgent request. All generated by AI, all perfectly imitated, all designed to trigger action before anyone verifies anything.

The pattern is always the same

Urgency plus authority plus a trusted channel. AI did not invent fraud. What has changed is that any attacker can now replicate all three elements perfectly without specialist skills or significant resources.

Why your team will not see it coming

If citizens cannot distinguish a Guardia Civil deepfake profile from a real one, there is no reason to assume that company employees will detect an AI-generated phishing email or a cloned CEO voice call.

The problem is not intelligence. It is the absence of specific training. A team that does not understand how generative AI works, does not know the warning signs of synthetic content and has no verification protocols for urgent requests is a vulnerable team by definition.

Vision Compliance reported in April 2026 that 83% of organisations have no formal inventory of the AI systems they use or deploy. If a company does not know what AI it has inside, it is very unlikely to recognise the AI being used against it from outside.

The gap is both technical and human. Deepfake detection tools exist but are not foolproof and require active implementation. The human factor is the most exploited precisely because it is the most unprotected.

Warning signs your team should recognise

Urgent requests that bypass usual channels. Bank detail changes sent by email without phone confirmation. Video calls where the person blinks rarely or audio is slightly out of sync. Emails with excessive personalisation but no verifiable signature. None of these require specialist software. They require training.
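The "verify before you act" protocol described above can be expressed as a simple rule. The following is a minimal sketch, not a production control: the supplier domain list, field names and the `requires_callback` helper are all hypothetical, and a real deployment would draw this data from your ERP or ticketing system.

```python
# Hypothetical "verify before you act" rule for payment-detail changes.
# Any bank-detail change, or any request from an unknown sender domain,
# is held until someone confirms it on a phone number already on file.

KNOWN_SUPPLIER_DOMAINS = {"acme-supplies.example", "logistics-partner.example"}

def requires_callback(sender_email: str, mentions_bank_change: bool,
                      phone_confirmed: bool) -> bool:
    """Return True if the request must be verified by phone before acting."""
    domain = sender_email.rsplit("@", 1)[-1].lower()
    unknown_sender = domain not in KNOWN_SUPPLIER_DOMAINS
    return (mentions_bank_change or unknown_sender) and not phone_confirmed

# A bank-detail change from a known supplier still needs a callback:
print(requires_callback("billing@acme-supplies.example", True, False))  # True
# Once confirmed on a known phone number, it can proceed:
print(requires_callback("billing@acme-supplies.example", True, True))   # False
```

The point is not the code itself but the policy it encodes: urgency and authority never override the second channel.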

What the EU AI Act requires

Article 4 of the EU AI Act does not only address corporate governance. It requires that everyone working with AI systems has a sufficient level of AI literacy. And that level necessarily includes the ability to recognise AI-generated content.

The obligation has been in force since February 2025. The compliance deadline with active penalties is August 2026. Four months remain.

Penalties for non-compliance with Article 4 reach up to 7.5 million euros or 1% of global annual turnover. But the regulatory sanction is the smaller scenario. The larger scenario is the CFO who transfers 25 million dollars because nobody trained them to verify a video call.

AESIA has authority to inspect whether training exists, whether it is documented and whether it covers the obligations of Article 4. Having given "an AI course" is not sufficient. It must be specific training with content relevant to the role and evidence that it was completed.

7.5M EUR

Maximum penalty for non-compliance with Article 4 of the EU AI Act (AI literacy). Or 1% of global annual turnover, whichever is greater. Deadline: August 2026.

Training as the first line of defence

AI safety training is not just a compliance requirement. It is the first line of defence against attacks that use AI as a vector.

A trained team understands how generative AI works, knows its limitations and warning signs, has protocols to verify unusual requests and understands why urgency and authority are the two factors attackers exploit first. No specialist software is needed to detect 80% of attempts. Just the knowledge of what to look for.

The same knowledge that satisfies Article 4 of the EU AI Act is what protects your company from a deepfake CEO fraud. They are not two different things. They are the same thing viewed from two angles.

Spanish companies with FUNDAE credit can cover 100% of the training cost. There is no budget excuse. The only real cost is staff time, measured in hours, not days.

EU AI Act Training · Article 4

Train your team before the deepfake arrives

Our Secure AI Application in Business course covers recognition of AI-generated content, current attack vectors and Article 4 obligations. 100% subsidised with FUNDAE credit. Certificates per participant included.

View Training Programme →
FUNDAE subsidised training


Next step

Your team needs to recognise what AI can fabricate

Article 4 training is not just compliance. It is the difference between a team that detects the deepfake and one that transfers the money. 100% subsidised with FUNDAE. Start now.