
Legal professional privilege is the most absolute protection in the legal system. It is not merely a rule of professional conduct — it is a fundamental right that underpins the right to a fair defence. Without the certainty that what a client tells their lawyer remains confidential, no one can obtain adequate legal representation.
AI tools offer lawyers substantial capabilities: contract analysis, legal research, summarising case files, generating standard documents. But anyone using cloud AI for that work is placing client information on external servers — and that is fundamentally incompatible with the core of the profession.
The absoluteness of legal privilege
In Belgium, the lawyer’s professional secrecy is enshrined in Article 458 of the Criminal Code, the Bar Act, and the code of conduct of the Order of Flemish Bars (OVB). In the Netherlands, Article 11a of the Advocates Act enshrines the duty of confidentiality, and Article 218 of the Code of Criminal Procedure grants the lawyer the right to refuse to testify about information entrusted in a professional capacity.
What makes legal professional privilege distinctive is its absoluteness. A doctor may in certain circumstances be released from their duty of confidentiality. A lawyer, in principle, cannot. The privilege protects not only the lawyer — it protects the client, the rule of law, and public trust in the justice system as a whole.
When client information reaches a cloud AI server, it leaves the absolute protective sphere of professional privilege. Legally, that is a serious risk. Practically, the damage is irreversible once the information has left.
Three scenarios that make the risks concrete

Scenario 1 — Mergers and acquisitions. A lawyer uploads a draft acquisition agreement to Claude or ChatGPT for contract analysis. The document contains the acquisition price, the company valuation, and the names of all parties involved. That information now sits on an external server. If it leaks — through a security incident, a government demand under the US CLOUD Act, or inclusion in a model’s training data — the consequences are catastrophic. Transactions collapse. Share prices move. Parties suffer losses that dwarf any legal fees.
Scenario 2 — Criminal defence. A lawyer processes their defence strategy through an AI tool: which witnesses have weak testimony, which pieces of evidence to challenge, which procedural arguments to prepare. If that information reaches the wrong hands in any form, the defence is compromised before it begins.
Scenario 3 — Family law. A lawyer processes correspondence in a divorce case through cloud AI. The correspondence contains their client’s financial vulnerabilities, psychological background, and information about the children. If the opposing party gains access to that information, the consequences for the client are personal and irreversible.
“Legal privilege does not just protect the information — it protects the relationship. And that relationship only exists if the client knows that nothing leaves their lawyer.”
GDPR and the AI Act: additional layers
Beyond professional privilege, additional legal frameworks apply. Lawyers regularly process criminal data — information about convictions, charges, and suspicions. The GDPR protects this separately under Article 10 as a distinct sensitive category alongside the special categories of Article 9.
The AI Act classifies AI applications in the administration of justice as high-risk (Annex III, point 8). Contract analysis and legal research that support the lawyer without autonomous decision-making likely fall outside that classification. But any AI application contributing to a legal decision affecting a person’s rights requires particular care — and human oversight as a legal obligation.
The solution: AI in the office, not outside it
Local AI resolves the fundamental problem. Contract analysis, legal research, summarising case files, generating standard documents — all of this can be done by a local model running on hardware in the office. Client information never leaves the firm’s servers.
For contract analysis and research, current open-weight models — Llama 3, Mistral, Qwen — are already capable enough for supporting work. They do not replace the lawyer’s legal judgement. They take over the repetitive and time-consuming tasks. And they do so without moving client information outside the office.
Three steps you can take now
1. Establish an internal AI policy
Which categories of information may staff process with which tools? A clear internal policy is the first line of defence — not only against legal risk, but against well-intentioned colleagues using cloud AI without understanding the implications.
2. Identify the repetitive tasks
Which tasks cost your firm the most time and carry the least client-specific risk? Standard documents, templates, neutral research: these are the first candidates for AI support. Start there, and expand once the local architecture is in place.
3. Invest in local infrastructure
An AI server for a law firm is not a major project. A capable mini-PC running Ollama with a strong open-weight model handles most daily support work. One-time investment. No subscription. Full control. And a clear answer to any client question about how you protect their data.
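To make the third step concrete, here is a minimal Python sketch of how a case-file summary might be requested from a model served locally by Ollama. It assumes Ollama’s default endpoint at http://localhost:11434 and a pulled model such as llama3; the function names and the prompt wording are illustrative, not part of any particular firm’s setup.

```python
import json
import urllib.request

# Ollama's default local endpoint; requests to it never leave this machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Assemble the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def summarise_locally(case_text: str, model: str = "llama3") -> str:
    """Ask the local model for a summary; client data stays on the office server."""
    payload = build_request(
        model, "Summarise the key points of this case file:\n\n" + case_text
    )
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the endpoint is localhost, the text passed to summarise_locally never crosses the office boundary, and swapping llama3 for a Mistral or Qwen model requires no code change.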