
As an insurance broker, you act for your client. You are not an agent of a single insurer — you work in the interest of the person across the desk. That makes your position distinctive, and your data processing risk too. You hold a combination of information that few other professions accumulate in one place: a financial profile, claims history, and — for life and income protection insurance — health information.
That last category has its own legal status. It is Article 9 GDPR: special category personal data in the strictest sense. And it is exactly that data that ends up outside your office the moment you use a cloud AI platform — often without a valid lawful basis, and often without realising it.
The data you process every day
Your client file is richer than it first appears. Name, address, financial profile — that is the layer everyone sees. But for life and income protection insurance it goes deeper: medical history, conditions, medication, sometimes disability or incapacity-for-work status. All of that falls under Article 9 of the GDPR — the special categories of personal data to which the strictest obligations apply.
Processing Article 9 data is in principle prohibited. Exceptions exist, but they are specifically defined. For brokers, the most commonly relied-upon basis is explicit consent of the data subject: Article 9(2)(a). But that consent is not broad. It covers the processing activity described on the application form — typically submitting a quote request to an insurer. It does not cover loading those same answers into an external AI analytics platform. That use requires a separate lawful basis and a separate transparency obligation.
The argument that the insurer carries the responsibility does not hold. You are the controller for the health data you collect from your client. The insurer is a separate controller for what they do with it after your transfer. Those two processing activities are legally distinct — your responsibility does not end on transfer.

What goes wrong with cloud AI
Consider this: you use a cloud AI platform for client advisory support and risk analysis. The interface is smooth, the outputs are useful, and the provider has a GDPR notice on their website. But what is actually happening?
A client completes a medical questionnaire for a life insurance quote. You load those answers — together with their financial profile and claims history — into the platform. The Article 9 data leaves your office’s secure environment and is processed on the cloud provider’s servers, potentially outside the EU. The platform may train its risk models on aggregated client data from all connected brokers. Your client’s health information contributes to a model shared with competitors.
When that client asks how their health data was used, you cannot give an answer that covers the cloud provider’s full processing chain. And when the GBA or the AP opens an investigation following a complaint, it turns out that the data processing agreement for the AI platform does not explicitly cover Article 9 processing. That is a violation.
GDPR certification of the AI tool does not resolve this. Certification addresses security measures for data storage. It does not confirm that you have a valid lawful basis for submitting Article 9 data to that tool. You must demonstrate that — the tool cannot do it for you.

The architecture that works
“You process health information. That is Article 9 GDPR — the most restricted category. You cannot send that data to a cloud AI platform without an explicit lawful basis and a watertight data processing agreement. And almost no one gets that right.”
The answer is not to avoid AI. It is to choose where the AI runs. A locally deployed AI processes the complete client file — financial profile, claims history, and health information — on your own system. You remain the sole controller. Processing purposes are controllable and demonstrable. No Article 9 data leaves your office.
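What that looks like in practice can be sketched very simply. The fragment below is an illustration only, assuming a model served locally over HTTP; the endpoint, port and field names are assumptions for the sketch, not a specific product’s API. The point is architectural: the request never leaves the machine.

```python
# Minimal sketch: risk analysis against a locally hosted model.
# The endpoint, port and payload fields are illustrative assumptions,
# not a specific product's API. The client file never leaves this machine.

import json
from urllib import request

LOCAL_MODEL_URL = "http://localhost:8080/v1/analyse"  # local only, no external host


def analyse_client_file(client_file: dict) -> dict:
    """Send the complete client file to the locally running model."""
    payload = json.dumps({
        "task": "risk_analysis",
        "client_file": client_file,  # may contain Article 9 health data
    }).encode("utf-8")

    req = request.Request(
        LOCAL_MODEL_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # stays on localhost
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    result = analyse_client_file({
        "financial_profile": {"annual_income": 48000},
        "claims_history": [],
        "health": {"conditions": ["asthma"]},  # Article 9 data, processed locally
    })
    print(result)
```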
This architecture also supports your IDD duty of care. The Insurance Distribution Directive requires an advisory process that is demonstrably in the client’s best interest. AI that informs the advice but is not auditable does not satisfy that obligation. Local AI generates a documentable advisory process: every step traceable, every decision recorded — something you can show to the FSMA or the AFM.
That makes local AI more than a privacy choice. It is an argument for the quality and demonstrability of your advice. For brokers who have spent years proving they work carefully and independently, that fits naturally into the way they operate.
Three steps to start with
Step 1 — Map your Article 9 data. Which files contain health information? Life, hospitalisation, and income protection policies are the most obvious categories. Those files need separate treatment in your records of processing activities — with an explicit lawful basis for each processing purpose.
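As an illustration of what such a record entry can look like — the field names and example values below are assumptions, not a prescribed format — each file can be tagged with whether it contains Article 9 data and with one lawful basis per processing purpose:

```python
# Illustrative sketch of a records-of-processing entry for an Article 9 file.
# Field names and example values are assumptions, not a prescribed format.

from dataclasses import dataclass, field


@dataclass
class ProcessingPurpose:
    purpose: str            # e.g. "submit quote request to insurer"
    lawful_basis: str       # e.g. "explicit consent, Art. 9(2)(a) GDPR"
    consent_reference: str  # where that consent is documented


@dataclass
class ClientFileRecord:
    file_id: str
    policy_types: list[str]           # e.g. ["life", "income protection"]
    contains_article_9_data: bool
    purposes: list[ProcessingPurpose] = field(default_factory=list)


record = ClientFileRecord(
    file_id="2024-0147",              # hypothetical example
    policy_types=["life"],
    contains_article_9_data=True,
    purposes=[
        ProcessingPurpose(
            purpose="Submit quote request to insurer",
            lawful_basis="Explicit consent, Art. 9(2)(a) GDPR",
            consent_reference="Signed application form",
        ),
        # A separate purpose such as "AI-supported risk analysis" needs its
        # own entry here, with its own lawful basis, before any tool sees the data.
    ],
)
```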
Step 2 — Review your AI tools. Which tools process client data? Is there a data processing agreement in place? Does it explicitly cover the processing of special category personal data? If the answer is not immediately clear, stop sending Article 9 data to that tool until you have resolved it.
Step 3 — Document your advisory process. The IDD requires transparency and a demonstrable duty of care. Make sure the AI tools you use in the advisory process generate an audit trail you can show to the supervisor. A local system that automatically builds the advisory record provides a solid foundation for that.
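In its simplest form, such a record is nothing more than an append-only log kept on your own system. The sketch below assumes a JSON Lines file; the file name, field names and example steps are illustrative assumptions, not a prescribed format.

```python
# Sketch of a local, append-only advisory audit trail (JSON Lines).
# File name, field names and example values are illustrative assumptions.

import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("advisory_audit_trail.jsonl")


def log_advisory_step(file_id: str, step: str, detail: str) -> None:
    """Append one advisory step to the local audit trail, with a timestamp."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "file_id": file_id,
        "step": step,      # e.g. "needs analysis", "product comparison", "advice issued"
        "detail": detail,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")


# Example usage: each AI-supported step is recorded the moment it happens,
# so the full trail can be shown to the supervisor afterwards.
log_advisory_step("2024-0147", "needs analysis", "Client requires income protection cover")
log_advisory_step("2024-0147", "product comparison", "Three products compared on local system")
```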

As a registered broker you have already shown that trust and diligence matter to you. Your clients share their most sensitive information with you — including information about their health. The architecture of your AI tools must support that relationship of trust, not undermine it.
Local AI is not a compromise. It is the only solution that supports both GDPR Article 9 compliance and the IDD duty of care at the same time — without forcing you to choose between efficiency and compliance.