Compliance & Security January 9, 2026 8 min read

AI Risks Your Company Should Know: Does Your Business Need an AI Strategy?

Companies are adopting AI at full speed but are unaware of the cybersecurity risks and EU AI Act penalties. Auditing, classification and compliance are urgent.

Carlos Salgado CEO & Co-founder · Delbion

This week I received the call that always makes me uncomfortable: "Carlos, we've been asked for documentation about our AI systems and we don't even know what we have or how to classify it. Can they fine us?"

The short answer: yes. And it could mean the end of your business.

At Delbion we have been auditing systems for over 20 years and, over the last year, the same pattern keeps repeating: companies adopt AI at full speed (ChatGPT for customer service, machine-learning automations, predictive analytics...) with no idea of the cybersecurity risks they are taking on.

And worse: they also don't know that the European AI Act is already here, with fines of up to €35 million.

Sound familiar? Keep reading.

Your HR Person Just Uploaded 50 CVs to ChatGPT

Two months ago I audited a company that was using generative AI to "streamline processes". Sounds good, right?

Until we discovered that the HR team was copying full CVs into ChatGPT to generate summaries. Confidential contracts in AI tools to draft emails. Sales data in free platforms to create reports.

The problem: none of those tools had a DPA (Data Processing Agreement), and the data was being used to train third-party models.

Result: potential GDPR violation. If any of that data is leaked or misused, the company is liable.

Shadow AI: The Marketing Department Installed 3 AI Add-ons Without Telling IT

Remember when we talked about "Shadow IT"? Those systems that departments implement without IT knowing about it.

Now we add a new layer: Shadow AI.

AI-integrated browser extensions. Chrome plugins for automatic writing. Third-party APIs connected to the CRM or ERP without security review.

In my last audit we found 7 AI tools that no one in IT knew about. Seven. And some had known prompt injection vulnerabilities (essentially, manipulating AI instructions to make it do things it shouldn't).

The risk? Backdoors wide open.

"I Didn't Know This Was High-Risk" — Fine: €35 Million

The EU AI Act is not the future. It is the present.

The Act entered into force in August 2024, the first obligations (including the prohibited-practices rules) have applied since early 2025, and the framework will be fully applicable by 2027.

What many companies don't know is that their AI systems probably fall into the "high-risk" category without them realising it.

  • Do you use AI to screen job candidates? High risk.
  • Do you have an automated credit scoring system? High risk.
  • Do you use AI to evaluate employee performance? High risk.

And if you get it wrong... the Act's top tier of penalties reaches €35 million or 7% of annual global turnover (whichever is greater), with most high-risk breaches carrying fines of up to €15 million or 3%. This is no joke. We are seeing it in companies that thought they were "doing fine".

Key fact: The AI Act does not only apply to AI developers. If your company uses AI tools in any professional context, you are considered a deployer and you have compliance obligations right now.
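The screening questions above can be captured in a simple triage sketch. The category names below are illustrative examples drawn from the Annex III high-risk areas (recruitment, credit scoring, worker evaluation) and are my own labels, not an official taxonomy — and of course not legal advice:

```python
# Simplified triage sketch for EU AI Act risk screening.
# Category names are illustrative, loosely based on Annex III use cases.

HIGH_RISK_USES = {
    "cv_screening",         # AI used to filter or rank job candidates
    "credit_scoring",       # automated creditworthiness decisions
    "employee_evaluation",  # AI-based performance assessment of workers
}

def classify(use_case: str) -> str:
    """Return a coarse risk label for an AI use case."""
    return "high-risk" if use_case in HIGH_RISK_USES else "needs-review"

for use in ["cv_screening", "chatbot_faq"]:
    print(use, "->", classify(use))
```

Note the default: anything not explicitly recognised comes back as "needs-review", never "safe" — misclassification is exactly the mistake this section warns about.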

How to Protect Your Company (Without Dying Trying)

After 20 years auditing and certifying companies under ISO 27001, ENS and now the AI Act, experience shows that the solution is not complicated. It is methodical.

Your AI survival checklist:

1. AI Systems Inventory — List EVERYTHING that uses AI in your organisation (yes, including those plugins marketing installed without asking).

2. Risk Classification — Does your AI make decisions about people? Evaluate whether it is high-risk under the AI Act.

3. Clear Usage Policies — Put in writing what employees CAN and CANNOT do with AI tools.

4. Access Controls (ISO 27001) — Implement Annex A access controls (A.9.1 in the 2013 edition; A.5.15 and related controls in ISO 27001:2022) to limit who uses which tools.

5. FRIA (Fundamental Rights Impact Assessment) — Required for many deployers of high-risk systems.

6. Auditable Documentation — Record the decisions made by AI systems (the AI Act requires technical documentation to be kept for up to 10 years).

7. AI Contingency Plan — If your chatbot "goes rogue" or there is a data breach, what do you do? NIS2 requires an early warning within 24 hours.

If you answered "we don't have this" for more than two points, you need an audit. And you need it now.
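As a starting point, the inventory and classification steps above can live in a structured register rather than a scattered spreadsheet. A minimal sketch — the field names and example entries are assumptions to illustrate the idea, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in a minimal AI systems register (field names are illustrative)."""
    name: str
    vendor: str
    department: str
    processes_personal_data: bool
    has_dpa: bool                  # Data Processing Agreement signed?
    risk_class: str = "unclassified"
    approved_by_it: bool = False

register = [
    AISystemRecord("ChatGPT (web)", "OpenAI", "HR",
                   processes_personal_data=True, has_dpa=False),
    AISystemRecord("CRM lead-scoring API", "ExampleVendor", "Sales",
                   processes_personal_data=True, has_dpa=True,
                   approved_by_it=True),
]

# Flag entries needing immediate attention: personal data without a DPA,
# or tools IT never approved (Shadow AI).
flagged = [r.name for r in register
           if (r.processes_personal_data and not r.has_dpa)
           or not r.approved_by_it]
print(flagged)
```

Even a register this crude answers the auditor's first question — "what AI do you have, and who signed off on it?" — and gives you the list to work through, one system at a time.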

Secure AI is Not Optional: It's Your Competitive Advantage

There is a pattern that repeats:
Companies that get ahead of regulation win.
Those that wait for the fine to land lose time or, worse, money and reputation.

At Delbion we work on the principle of audit + implement + certify. We don't just tell you what's wrong. We help you fix it and prove it with recognised certifications (ISO 27001, ENS, EU AI Act compliance).

Because, in the end, what matters is not just compliance. It is building AI systems that your customers can trust.

Free Assessment

Is Your Company Ready for the EU AI Act?

In 60 minutes we audit your AI inventory, classify the risk of each system and deliver a concrete action plan to comply with European regulations — before it's too late.

Request Free Audit →
