AI in 2026 is powerful, and dangerous if you go about it the wrong way. Not Hollywood-dangerous: dangerous in the sense of "your customer file ending up at OpenAI", "your contracts sitting in a stray log", "your competitor reading your prompts".

Here's the concrete checklist I run through on every AI engagement. It's neither exhaustive nor a blocker, but it filters out 95% of the real risks.

1. Where does the data really live?

This is question number one, and the murkiest. In order of sensitivity:

| Provider | Where data transits | Default retention |
| --- | --- | --- |
| OpenAI (API) | USA | 30 days for abuse monitoring (can be disabled in Enterprise) |
| Anthropic (API) | USA / EU | 30 days (can be disabled) |
| Claude via AWS Bedrock | Region of choice (e.g. eu-west-3, Paris) | Zero retention |
| Mistral (API) | EU (France) | Variable, contractual |
| Self-hosted open source model | Your servers | You decide |

Rule I apply: for sensitive data (HR, health, finance, legal), I don't use public APIs without contractual guarantees. I go through Bedrock in an EU region, or through a self-hosted model.
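
To make the Bedrock route concrete, here's a minimal sketch of calling Claude from an EU region with boto3. The model ID below is an assumption: check the identifiers your AWS account actually exposes before copying it.

```python
# Minimal sketch: Claude via AWS Bedrock in an EU region, assuming boto3
# is configured with credentials and your account has access to the model.
import boto3

client = boto3.client("bedrock-runtime", region_name="eu-west-3")

response = client.converse(
    modelId="eu.anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed ID, verify yours
    messages=[{"role": "user", "content": [{"text": "Summarise this contract: ..."}]}],
    inferenceConfig={"maxTokens": 1024},
)

print(response["output"]["message"]["content"][0]["text"])
```

Same model as the public API, but the prompts stay in the region you chose, with zero retention.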

2. The "leaking prompts" problem

When you send a prompt to an LLM, you send all the context: the client's email, the contract, the figures you want to analyse. It's obvious, and yet I regularly see SMBs unknowingly sending:

  • Personally identifiable data to a non-EU model with no legal basis.
  • Customer conversation logs with no anonymisation.
  • Signed contracts, with names and amounts, into mainstream no-code tools.

The three things I systematically check:

  1. What exactly leaves in the prompt?
  2. Does the provider log prompts (and where)?
  3. Can we anonymise before sending (masking names, IBANs, numbers)?

On 70% of projects, I run the prompt through a local masking step before anything goes to the LLM. It's not complicated or expensive, and it changes everything.
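
For illustration, here's a minimal version of that masking step, using simple regexes for emails, IBANs and phone numbers. The patterns are simplified stand-ins, not production-grade; a real deployment would add named-entity recognition for person and company names.

```python
# Minimal sketch of a local masking step applied before any LLM call.
# The regexes are deliberately simplified; order matters (IBAN before PHONE
# so account digits aren't mislabelled as a phone number).
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}(?:[ ]?[A-Z0-9]){11,30}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s.-]{8,}\d\b"),
}

def mask(text: str) -> str:
    """Replace sensitive patterns with placeholders before sending to the LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

masked = mask("Client jean.dupont@example.com, IBAN FR76 3000 6000 0112 3456 7890 189")
# -> "Client [EMAIL], IBAN [IBAN]"
```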

3. GDPR compliance applied to AI

In 2026 the framework is much clearer than in 2023. The hard points, in order:

  • Legal basis: why are you allowed to process this data at all? Typically legitimate interest for internal productivity tools, or explicit consent.
  • Non-EU transfer: if you use OpenAI or Anthropic directly, you export data to the USA. Standard contractual clauses are mandatory, as is clear information to the people concerned.
  • Automated decision (Article 22): if AI makes a decision affecting a person (hiring, credit, dismissal, customer scoring), the person must be able to request a human review. Always (a minimal sketch of that gate follows this list).
  • Right of access and erasure: if a customer requests their data, you must be able to extract it. Including data ingested by your AI.
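
For the Article 22 point, here's a minimal sketch of what that gate can look like in code. `Decision`, `notify_reviewer` and the status values are hypothetical names for illustration, not a real API: the principle is simply that the model proposes and a named human disposes.

```python
# Minimal sketch of an Article 22 gate: the model only ever *proposes* a
# decision about a person; nothing takes effect until a human approves.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str   # the person concerned
    proposal: str     # e.g. "reject_application"
    rationale: str    # the model's stated reasoning, kept for the audit trail
    status: str = "pending_human_review"

def notify_reviewer(decision: Decision) -> None:
    # Stub: in practice, open a ticket or ping the reviewer on your channel.
    print(f"Human review needed for {decision.subject_id}: {decision.proposal}")

def propose_decision(subject_id: str, proposal: str, rationale: str) -> Decision:
    decision = Decision(subject_id, proposal, rationale)
    notify_reviewer(decision)
    return decision   # still pending: no effect on the person yet

def approve(decision: Decision, reviewer: str) -> None:
    decision.status = f"approved_by:{reviewer}"   # only a named human closes it
```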

Concretely, I systematically refuse AI projects that:

  • Score employees without human oversight.
  • Accept or reject applications with no possibility of appeal.
  • Process health or HR data without a dedicated environment.

4. Permissions: give the minimum, never more

An AI agent with access to your entire database is a security risk. An agent with access only to the 3 tables it needs, read-only, with access logs, is a controlled tool.

What I configure on every deployment:

  • Dedicated technical accounts (never a human's credentials).
  • Restricted scopes: read-only by default, write only on justified tables.
  • Token rotation every 90 days, automatic.
  • Immutable logs: who requested what, when, and with what result.

You must be able to answer the question "who accessed this client file last week?" in 30 seconds, even if the answer is "an automated AI agent".
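
As an illustration of the logging bullet, a minimal sketch of routing every agent query through one choke point that appends an audit record first. The log path and the `run_query` helper are hypothetical stand-ins for your own storage and database driver.

```python
# Minimal sketch: every database access by the agent goes through a single
# choke point that writes an append-only audit record before the query runs.
import json
import time
import uuid

AUDIT_LOG = "audit.jsonl"   # in production: an append-only, immutable store

def run_query(query: str):
    # Stub standing in for your real, read-only database connection.
    raise NotImplementedError

def audited_query(agent_id: str, table: str, query: str):
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "actor": agent_id,   # a dedicated technical account, never a human's
        "table": table,
        "query": query,
    }
    with open(AUDIT_LOG, "a") as log:   # log first, query second
        log.write(json.dumps(record) + "\n")
    return run_query(query)
```

With that choke point in place, "who accessed this client file last week?" is a one-line search over the log.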

5. What I do on the code side

A few non-negotiable ground rules:

  • No hardcoded secrets: API keys go in a vault (HashiCorp Vault, AWS Secrets Manager, or failing that, non-versioned environment variables).
  • No unguarded prompt paths: whenever a user can influence a prompt, filter and encapsulate the input, and never trust it.
  • Cost observability: a poorly scoped AI agent can cost €500/day instead of €5 if an attacker loops it. I always set a monthly cap.
  • Rollback plan: any automation that writes data must be possible to disable in 10 seconds, ideally via a single feature flag (the cap and the kill switch are both sketched after this list).
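
To make those last two bullets concrete, a minimal sketch of a budget guard plus a feature-flag kill switch, checked before every LLM call. The flag path and the €200 cap are assumptions, stand-ins for your own feature-flag service and budget.

```python
# Minimal sketch of a monthly cost cap and a kill switch in one guard.
import os

MONTHLY_CAP_EUR = 200.0
KILL_SWITCH = "/etc/ai-agent/disabled"   # assumed path: touch it to stop everything

def guard(spent_this_month_eur: float) -> None:
    if os.path.exists(KILL_SWITCH):
        raise RuntimeError("AI agent disabled via kill switch")
    if spent_this_month_eur >= MONTHLY_CAP_EUR:
        raise RuntimeError(f"monthly cap of EUR {MONTHLY_CAP_EUR:.0f} reached")

# Calling guard(current_spend) before each request means creating one file
# disables every write action in seconds, with no redeployment.
```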

6. The simple security audit I deliver on every engagement

No 50-page report. A one-page document that says:

  • What the AI processes, and what it doesn't.
  • Where data lives (provider + region + retention).
  • What permissions were granted, on what, and for how long.
  • Who's alerted in case of anomaly.
  • The procedure if a leak is suspected.

One page. Readable by your DPO or lawyer in 5 minutes. More useful than 50 pages no one reads.

What you must check before signing with an AI service provider

Four questions I urge you to ask every provider systematically, me included:

  1. Which LLM provider, in which region?
  2. Are prompts logged? Where? For how long?
  3. Can the tool be disabled in 10 seconds if needed?
  4. Who owns the data and the code at the end of the engagement?

If the provider hesitates on even one of the four, change providers.

Where to start

If you're already deploying AI and this checklist makes you uneasy, we can do a quick audit: one hour is enough to identify the two or three hard points and tell you what's worth fixing. No commitment, and no pitch for services you don't need.


To go further