"I haven't developed AI, this doesn't concern me."

I've heard this sentence every week since the AI Act was published. It's wrong, and for an SMB that figures it out too late, the mistake can cost up to €15 million or 3% of worldwide turnover.

You can be subject to the AI Act without having written a single line of code. Deploying an AI tool in your business is enough: a chatbot, HR software that screens CVs, automated client scoring, a support agent connected to your data. Most SMBs are in this situation without knowing it.

Good news: the heaviest obligations come into force in August 2026, not before. That leaves 4 months to get compliant if you're affected. Here's the checklist I apply with my clients.

What the AI Act says in 30 seconds

European regulation 2024/1689 classifies AI systems into 4 risk levels and imposes obligations proportional to each level.

Level | Examples | Status | Obligations
Unacceptable risk | Social scoring, behavioural manipulation | Prohibited since February 2025 | Use impossible
High risk | HR (CV screening), credit scoring, biometrics | Allowed with heavy obligations | Documentation, human oversight, logs, evaluation
Limited risk | Chatbot, content generation | Allowed with transparency | Inform the user they're talking to AI
Minimal risk | Spam filter, product recommendations | Allowed freely | No specific obligations

The key: most SMBs are in limited or minimal risk. Only those using AI on sensitive decisions (HR, credit, health, legal, education) fall into high risk. But knowing where you stand is your responsibility, not your vendor's.

2026 timeline — what applies when

Date | What comes into force
February 2025 | Bans on unacceptable-risk uses (already effective)
August 2025 | Obligations on general-purpose models (GPT, Claude, etc.), borne by vendors
August 2026 | Obligations on high-risk systems, borne by you if affected
August 2027 | Extended obligations on certain specific cases

The deadline that matters for an SMB is August 2026. If you deploy a high-risk AI system, everything must be ready before then.

Penalties — the figures that get managers moving

  • Prohibited practices: up to €35M or 7% of worldwide turnover.
  • High-risk non-compliance: up to €15M or 3% of worldwide turnover.
  • False information to authorities: up to €7.5M or 1% of worldwide turnover.

For an SMB with €5M turnover, 3% means up to €150,000 in penalties. The French authorities (mainly the CNIL) have already announced that enforcement would ramp up progressively, but "progressively" means "we start with the most visible". Don't bet on being overlooked.
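
To make the arithmetic concrete, here is a minimal sketch in Python, assuming the SME rule applies (the lower of the fixed cap and the turnover percentage); the function is mine, not official tooling:

```python
def sme_fine_ceiling(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Ceiling of an AI Act fine for an SME: the lower of the fixed cap
    and the turnover percentage (assumption: the SME rule applies the
    lower of the two amounts)."""
    return min(cap_eur, pct * turnover_eur)

# High-risk non-compliance, SMB with €5M worldwide turnover:
print(sme_fine_ceiling(5_000_000, 15_000_000, 0.03))  # 150000.0
```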

Are you a deployer under the AI Act?

You are a deployer if you use an AI system in your professional activity, even if you didn't develop it. Concretely, if you use:

  • An automated CV screening tool (even if it's a feature of your ATS).
  • Automatic client risk scoring (credit, solvency).
  • An AI agent making operational decisions (validating a request, closing a ticket).
  • A biometric system (facial recognition, fingerprints).
  • An AI-based employee evaluation tool.

→ you are a deployer of a potentially high-risk system.

You are not subject to high-risk obligations if you only use:

  • A general-purpose chatbot on your site (limited risk — transparency is enough).
  • ChatGPT or Claude as a personal productivity tool (not integrated into your business decisions).
  • A spam filter, a spell checker, a product suggestion.

The 5 concrete steps to tick off before August 2026

1. Inventory of your AI systems (2 to 5 days of work)

List every AI system used in your company. Everything, including AI features in your standard SaaS. Often-forgotten examples:

  • "Automatic lead scoring" in your CRM.
  • "Candidate recommendation" in your ATS.
  • "Churn prediction" in your customer tool.
  • Internal AI agents, chatbots, writing assistants connected to your data.

For each system, note: name, vendor, use, data processed, who uses it.
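
If you want something sturdier than a spreadsheet, a minimal sketch of an inventory record could look like this (the field names are my suggestion, not mandated by the regulation):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One line of the AI inventory: what the tool is, who provides it,
    what it does, and what data it touches."""
    name: str                                                 # e.g. "Candidate recommendation"
    vendor: str                                               # who provides the system
    use: str                                                  # business task or decision it supports
    data_processed: list[str] = field(default_factory=list)   # e.g. ["CVs", "emails"]
    users: list[str] = field(default_factory=list)            # teams or roles using it
    risk_level: str = "unknown"                               # filled in at step 2

inventory = [
    AISystemRecord("Candidate recommendation", "Your ATS vendor",
                   "Pre-filters job applications",
                   ["CVs", "cover letters"], ["HR"]),
]
```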

2. Risk-level qualification (1 to 2 days)

For each one, determine the risk level using the AI Act grid. The 8 areas explicitly classified as high-risk (Annex III of the regulation):

  1. Biometrics.
  2. Critical infrastructure.
  3. Education and training.
  4. Employment, HR, worker management (CV screening, evaluation, dismissal).
  5. Access to essential private or public services (credit, insurance, social assistance).
  6. Law enforcement.
  7. Migration, asylum, border control.
  8. Administration of justice and democratic processes.

If none of your systems touch these 8 areas, you're probably in limited or minimal risk — light obligations (mainly transparency).
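
As a first pass, qualification can be as simple as checking each inventoried system against the Annex III list. A sketch (area labels shortened by me; this is a triage aid, not legal advice):

```python
# Shortened labels for the 8 Annex III high-risk areas
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education_training",
    "employment_hr", "essential_services", "law_enforcement",
    "migration_asylum_borders", "justice_democracy",
}

def first_pass_risk(areas_touched: set[str]) -> str:
    """Triage against Annex III: 'high' if the system touches any listed
    area, otherwise limited or minimal (refine with counsel if in doubt)."""
    return "high" if areas_touched & ANNEX_III_AREAS else "limited_or_minimal"

print(first_pass_risk({"employment_hr"}))           # high (CV screening)
print(first_pass_risk({"product_recommendation"}))  # limited_or_minimal
```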

3. Documentation for high-risk systems (5 to 15 days)

If you have at least one high-risk system, you must produce and keep up to date:

  • A technical description (operation, training data, known limits).
  • A risk management file (identified risks, mitigation measures).
  • A human oversight procedure (who validates, when, how to override the AI).
  • Decision logs (traceability of the decisions made by the system).
  • A conformity assessment before placing on the market or deployment.

Good news: if the system is provided by a vendor, much of the documentation comes from them. Your job is to retrieve it, complete it with how you actually use the system, and keep it up to date.
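
To track what you have actually collected per system, even a simple checklist map helps. A sketch mirroring the five documents above (the short names are mine):

```python
# The five high-risk documents listed above, as short keys.
REQUIRED_DOCS = [
    "technical_description",      # operation, training data, known limits
    "risk_management_file",       # identified risks, mitigation measures
    "human_oversight_procedure",  # who validates, when, how to override
    "decision_logs",              # traceability of system decisions
    "conformity_assessment",      # before market placement or deployment
]

def missing_docs(collected: set[str]) -> list[str]:
    """Return the required documents not yet on file for a system."""
    return [doc for doc in REQUIRED_DOCS if doc not in collected]

print(missing_docs({"technical_description", "decision_logs"}))
# ['risk_management_file', 'human_oversight_procedure', 'conformity_assessment']
```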

4. Setting up human oversight

Key obligation: for any high-risk system, a human must be able to understand, monitor and override the AI's decisions.

Concretely, this means:

  • A staff member trained on the system (who knows what it does well and badly).
  • A reporting procedure if the system drifts.
  • A mechanism to cancel an AI decision after the fact (refund, recourse, manual validation).
  • A consultable log of what the AI decided.
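
In code terms, the minimum viable version is an append-only log of AI decisions plus a human override entry point. A sketch, assuming your system exposes its decisions programmatically (names and structure are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DecisionEntry:
    """One AI decision, kept consultable for oversight and recourse."""
    system: str                        # which AI system decided
    subject: str                       # e.g. ticket ID, application ID
    decision: str                      # what the system decided
    timestamp: datetime
    overridden_by: str | None = None   # reviewer who cancelled/corrected it
    override_reason: str | None = None

decision_log: list[DecisionEntry] = []

def record(system: str, subject: str, decision: str) -> DecisionEntry:
    """Append a decision so it stays traceable after the fact."""
    entry = DecisionEntry(system, subject, decision, datetime.now(timezone.utc))
    decision_log.append(entry)
    return entry

def override(entry: DecisionEntry, reviewer: str, reason: str) -> None:
    """A trained staff member cancels or corrects an AI decision."""
    entry.overridden_by = reviewer
    entry.override_reason = reason

entry = record("ticket-triage", "TICKET-4821", "close_as_duplicate")
override(entry, "j.martin", "Not a duplicate: different customer account")
```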

5. Informing the persons concerned

For any AI system that interacts with a person (client, candidate, employee), you must inform them. Examples:

  • Chatbot: "You are chatting with an automated assistant."
  • CV screening: "Your application is pre-filtered by an automated system. You can request a human review."
  • Credit scoring: "This decision is partly automated. You can request human intervention and contest the decision."

This information must be clear, visible, and given before the interaction, not hidden in the terms of service.
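
For a chatbot, the disclosure can literally be the first message, sent before any user input is processed. A sketch (the handler shape is hypothetical; adapt it to whatever framework your chatbot runs on):

```python
from typing import Callable

AI_DISCLOSURE = ("You are chatting with an automated assistant. "
                 "You can ask for a human at any time.")

def start_conversation(send_message: Callable[[str], None]) -> None:
    """Send the AI disclosure before the first exchange,
    not buried in the terms of service."""
    send_message(AI_DISCLOSURE)

# Works with any transport that exposes a send function:
start_conversation(print)
```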

Budget orders of magnitude

Situation | Compliance cost
Minimal risk only (95% of SMBs) | €0 - €500 (inventory check)
Limited risk (chatbot, content generation) | €500 - €2,000 (transparency, notices)
High risk, 1 system | €3,000 - €8,000 (documentation, oversight, audit)
High risk, several systems | €8,000 - €20,000

These are the DGE (Direction Générale des Entreprises — France's General Directorate of Enterprises) figures for a typical SMB. The order of magnitude is consistent with what I see on engagements.

What does NOT automatically trigger a high-risk obligation

Two frequent confusions:

1. "I use an LLM so I'm high-risk." Wrong. The LLM as a tool is in limited risk (you inform the user). You become high-risk only if your use of the LLM falls into one of the 8 Annex III areas.

2. "My vendor handles it." Half wrong. The vendor has its obligations (documentation, transparency). But as a deployer, you have your own obligations that cannot be delegated: human oversight, informing the persons concerned, logging your uses.

Decision grid — 3 questions

Question 1 — Do my AI systems touch one of the 8 high-risk areas?

  • No: limited or minimal risk. Light compliance (transparency, inventory). Budget €0 to €2k.
  • Yes, on 1 system: targeted high risk. Documentation, human oversight, procedure. Budget €3k to €8k.
  • Yes, on several: extended high risk. Call specialised counsel. Budget €8k to €20k.

Question 2 — Do I have an up-to-date inventory of my AI tools?

  • Yes, updated in the last 6 months: you're ahead, move to qualification.
  • No: start with that. 2 to 5 days of work, it's the foundation of everything.

Question 3 — Have I informed the persons concerned?

  • Yes: check that it's clear, visible, before the interaction.
  • No: add it immediately, regardless of the risk level. This is the only obligation that applies to all AI systems that interact with a person.
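
If it helps, the whole grid fits in one triage function. A sketch (labels and budgets mirror the grid above; purely illustrative):

```python
def triage(high_risk_systems: int, inventory_fresh: bool,
           persons_informed: bool) -> list[str]:
    """Turn the three questions into a to-do list, mirroring the grid above."""
    todo = []
    if not inventory_fresh:
        todo.append("Build or refresh the AI inventory (2-5 days)")
    if high_risk_systems == 0:
        todo.append("Light compliance: transparency + inventory (€0-€2k)")
    elif high_risk_systems == 1:
        todo.append("Targeted high-risk work: docs, oversight, procedure (€3k-€8k)")
    else:
        todo.append("Extended high-risk work: call specialised counsel (€8k-€20k)")
    if not persons_informed:
        todo.append("Add a clear AI disclosure before every interaction, now")
    return todo

print(triage(high_risk_systems=1, inventory_fresh=False, persons_informed=True))
```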

Where to start

If you want to take stock before August 2026, 30 minutes is enough for a first inventory. I look at your tools, tell you which systems are potentially high-risk, and give you an honest estimate of what needs to be done. For genuinely specialised legal questions, I refer you to a DPO or a specialised firm; I don't play lawyer.

If you want to dig into security upstream (provider choice, sovereignty, data), the AI security checklist covers the technical choices I apply on every engagement.


To go further