
HR professionals and recruitment agencies process the most personal information people ever share: their career history, their motivations, their salary expectations, their assessment results. And they do so in a context of explicit decision-making — about who progresses and who does not. AI tools have transformed that process. CV screening, candidate matching, video interview analysis, automated chatbots — all widely deployed. But what few users realise is that these tools have been explicitly named by European law as high-risk artificial intelligence. With all the consequences that follow.
Annex III Point 4 — No Grey Area
The AI Act (Regulation (EU) 2024/1689) is unambiguous. Annex III, point 4 explicitly designates AI systems used in recruitment and personnel selection as high-risk. This was a deliberate policy choice: the weight of employment decisions — who gets to work, who does not — demands the highest standards of protection.
What falls within scope? CV screening tools, matching algorithms, assessment analysis, AI-driven video interview analysis, chatbots that communicate selection decisions — any system that evaluates, compares, or ranks candidates falls into this category. The obligations are correspondingly demanding: technical documentation, decision logging, transparency toward candidates, human oversight with override capability, and a conformity assessment.
On top of that, GDPR Article 22 gives individuals the right not to be subject to solely automated decision-making with significant consequences. A rejection delivered by an algorithm, without human involvement or an explicable rationale, is a violation of that right. If a candidate requests an explanation and you cannot provide one, you risk enforcement action by the data protection authority.
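What do "decision logging" and "human oversight" look like in practice? Here is a minimal sketch, assuming a hypothetical log structure (the field names are illustrative, not prescribed by the Act): record what the tool recommended, what the human decided, and why, so that an Article 22 request can actually be answered.

```python
# Illustrative sketch: a deployer-side decision log for AI-assisted screening.
# Field names are assumptions for illustration; the AI Act requires logging and
# human oversight, not this exact structure.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ScreeningDecision:
    candidate_ref: str      # pseudonymised reference, not the candidate's name
    vacancy_ref: str
    ai_recommendation: str  # what the tool suggested, e.g. "advance" or "reject"
    ai_rationale: str       # the model's stated reasons, as shown to the reviewer
    human_reviewer: str
    human_decision: str     # the decision that actually counts
    overrode_ai: bool       # True when the reviewer departed from the AI output
    human_rationale: str    # the explanation you can later give the candidate
    timestamp: str = ""

def log_decision(decision: ScreeningDecision, path: str = "screening_log.jsonl") -> None:
    """Append one decision as a JSON line, so every outcome stays explainable."""
    decision.timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(decision), ensure_ascii=False) + "\n")

log_decision(ScreeningDecision(
    candidate_ref="cand-0042", vacancy_ref="vac-2025-07",
    ai_recommendation="reject", ai_rationale="Missing required certification",
    human_reviewer="recruiter-jd", human_decision="advance",
    overrode_ai=True, human_rationale="Equivalent experience compensates",
))
```

The override field is the point: if no human ever departs from the AI output, the "human oversight" is oversight in name only.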
Then there is anti-discrimination law. In Belgium, the Act of 10 May 2007 prohibits discrimination on grounds of sex, origin, age, sexual orientation, and other protected characteristics. In the Netherlands, the General Equal Treatment Act (AWGB) covers the same ground. AI models trained on historical hiring data structurally reproduce the biases embedded in that data — legally just as problematic as deliberate discrimination.
A Scenario Already Playing Out
A recruitment agency migrates to a modern cloud ATS with built-in AI screening, video interview analysis, and candidate matching. The platform is well-known, the interface is clean, the vendor guarantees GDPR compliance. What goes wrong?
First: thousands of CVs per year — names, addresses, employment histories, qualifications — are uploaded to servers outside the EU. Second: the video interview analysis evaluates tone of voice, facial expressions, and word choice. That is high-risk AI under Annex III. The required technical documentation is not provided by the platform as standard. Third: the platform trains its matching models on aggregated data from all connected agencies — including your candidate database and historical assessment decisions.
Then a candidate requests access to the reasoning behind their rejection. GDPR Article 22. You have no explanation you can give. The data protection authority investigates a complaint. You — as the deployer — are responsible for the AI Act compliance of the system. Not the vendor.
“If you use a SaaS tool that screens CVs or ranks candidates, you as the deployer are responsible for that system’s compliance. Not the vendor. You.”
This is not a hypothetical risk. It is the structure of the law. And it applies to every agency, every internal HR team, every talent acquisition manager deploying an AI screening tool today — even when that tool is taken as a managed SaaS service. The deployer carries the responsibility. “The vendor assured us” is not a legal defence.

The Discrimination Risk Is Not Hypothetical
AI models are as neutral as the data on which they were trained. If a sector spent ten years predominantly hiring men, a model trained on those decisions learns that men are stronger candidates. Certain word choices correlated with a particular background get weighted into the score. That is structural bias in algorithmic form.
Federgon, the Belgian federation of HR service providers, publishes sector guidelines for recruitment and data management. In the Netherlands, ABU and NBBU publish codes of conduct for handling candidate data. Unia, Belgium's Interfederal Centre for Equal Opportunities, handles discrimination complaints and pays specific attention to algorithmic discrimination. The College voor de Rechten van de Mens in the Netherlands does the same.
An AI tool that has not been actively evaluated for bias is a liability. Not only toward the regulator, but toward every candidate unjustly excluded by an algorithm that no one in your organisation can explain.

Local AI: The Recruiter Decides, the Data Stays In-House
There is a way forward that is both practical and legally coherent. Local AI — a language model running on your own server or workstation, with no data ever reaching an external platform — lets you deploy AI for the administrative burden of recruitment, without the legal exposure of SaaS AI screening.
In concrete terms: AI that reads incoming CVs, structures them, and produces a standardised summary for the recruiter. Not to automatically exclude candidates, but to accelerate the initial screening phase. The recruiter decides. AI supports. Candidate data does not leave the company environment.
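To make the pattern concrete, here is a minimal sketch, assuming a locally hosted model served through Ollama on its default local port; the model name, file name, and prompt are illustrative assumptions, not a prescribed setup.

```python
# Illustrative sketch: summarising a CV with a locally hosted model via Ollama.
# Model name, file name, and prompt are assumptions; the point is that the CV
# text never leaves localhost.
import requests  # standard HTTP client; Ollama listens on localhost by default

OLLAMA_URL = "http://localhost:11434/api/generate"

def summarise_cv(cv_text: str, model: str = "llama3.1") -> str:
    """Produce a standardised summary for the recruiter. The recruiter decides."""
    prompt = (
        "Summarise this CV into: years of experience, key skills, "
        "education, and notable gaps. Do not score or rank the candidate.\n\n"
        + cv_text
    )
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

with open("incoming_cv.txt", encoding="utf-8") as f:
    print(summarise_cv(f.read()))
```

Note that the prompt deliberately asks for a summary, not a score: the tool prepares, the recruiter evaluates.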
The same logic applies to interview records. Selection interviews contain direct quotes, emotional impressions, and personal information shared by candidates in confidence. That data does not belong on a cloud AI platform. Local AI transcribes and structures — on your infrastructure, under your control. And for onboarding: personalised welcome letters and introductory materials generated from contract information that never leaves your own environment.
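For transcription, the same principle in a few lines: a sketch assuming the open-source openai-whisper package and an illustrative file name. The model weights are downloaded once and cached, but the audio and the transcript stay on your own machine.

```python
# Illustrative sketch: transcribing an interview recording locally with
# openai-whisper (pip install openai-whisper). File name and model size
# are assumptions; nothing is sent to an external service.
import whisper

model = whisper.load_model("base")  # larger models trade speed for accuracy
result = model.transcribe("interview_2025-03-01.wav")

# Store the transcript inside your own environment, never on a cloud platform.
with open("interview_2025-03-01.txt", "w", encoding="utf-8") as f:
    f.write(result["text"])
```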
The result: full control over documentation and transparency, the ability to respond to GDPR Article 22 requests, and no dependence on a cloud provider’s AI Act compliance posture.

Three Concrete Steps
Step 1: Map your current AI tools. Which tools in your recruitment process evaluate, compare, or rank candidates? CV screening software, matching algorithms, video interview analysis: any tool that does so falls under Annex III of the AI Act. Verify whether you, as deployer, hold the required technical documentation.
Step 2: Review your data processing agreements. Every ATS, AI screening tool, assessment platform, or HR cloud service that processes candidate data on your organisation’s behalf requires a data processing agreement. Check that the agreement exists, is current, and addresses AI Act obligations. A valid privacy policy is not the same as AI Act compliance.
Step 3: Consider local AI for candidate data. For CV processing, interview summarisation, and onboarding documents, local AI is a direct solution: AI support for your recruiters without personal data reaching an external server, and with full control over transparency and documentation. The decision stays human. So does the compliance.
The AI Act is not a future concern. It is already in force, and the obligations for Annex III high-risk systems are taking effect on a fixed timetable. If you are deploying AI in your recruitment process today — as a deployer, not as a vendor — the responsibility is yours. That is precisely why the choice of local AI in HR is not a technical preference. It is a legal one.