
The AI Act is the world’s first binding AI law. It applies to anyone who develops, sells or deploys AI systems in the EU, even if your provider is based outside Europe. The obligations for high-risk applications are expected to take full effect on 2 December 2027.
Most business owners think it doesn’t apply to them. That is incorrect.
What the AI Act Actually Says
The law defines an “AI system” broadly: a machine-based system that, with a degree of autonomy, infers from the input it receives how to generate output — predictions, recommendations, decisions or generated content — that can influence physical or virtual environments.
Your tool that screens CVs: an AI system. Your chatbot that directs customers: an AI system. The software your HR department uses to rank candidates: an AI system. The term “AI” under this law is not limited to ChatGPT. It covers everything that learns patterns and supports decisions on that basis.
The Risk Model
The AI Act works with four risk levels. The higher the risk, the heavier the requirements:
- Unacceptable risk — prohibited: Social scoring by governments, manipulative techniques targeting vulnerable groups, and biometric mass surveillance in public spaces, including real-time facial recognition by police (with narrow exceptions).
- High risk — heavy obligations: The category that affects most businesses. See below.
- Limited risk — transparency: Chatbots must identify themselves as AI. AI-generated content must be labelled.
- Minimal risk — no specific requirements: Spam filters, AI in video games, playlist recommendation systems.
What Falls Under High Risk?
This is the category most business owners incorrectly believe they avoid. Annex III of the law contains the full list. The most relevant categories for businesses:

- Recruitment and selection: Screening CVs, ranking candidates, analysing job interviews, determining promotions
- Credit and finance: Assessment of creditworthiness or credit scores
- Medical devices: AI as a safety component in medical equipment (class IIa and above)
- Education: Determining access to education or training, assessing students
- Critical infrastructure: AI in energy, water, transport management
- Law enforcement and security: Recidivism risk assessment, emotion detection, criminal profiling
If your business uses even one of these applications — as an end user or as a software provider — you are a “deployer” or provider under the law. And you have obligations.
What Those Obligations Mean in Practice
For high-risk systems, a mandatory framework applies. Most of these duties fall on the provider that builds the system, but several also bind the organisation that uses it:
- Conformity assessment before the system is placed on the market or put into use
- Technical documentation — description of the system, training data, test results
- Automatic logging — the system must maintain log files of its operation; see the sketch below this list
- Human oversight — a human must be able to review, challenge and override AI decisions
- Transparency toward users — anyone affected by a decision must know AI was involved
- Registration in EU database for providers of high-risk systems
This is not something you can leave to someone else. You must be able to produce it yourself: during an inspection, after an incident, and if a customer or employee takes the matter to court.
The Timeline
The AI Act entered into force on 1 August 2024. Obligations are being phased in:
- 2 February 2025: Prohibitions for unacceptable risk
- 2 August 2025: Obligations for general-purpose AI models (GPAI) such as GPT-4 or Claude
- 2 August 2026: Original deadline for high-risk systems (postponement pending; plenary vote 26 March)
- 2 December 2027: Expected new deadline for high-risk systems (Annex III)
- 2 August 2028: High-risk products under sectoral safety legislation (Annex I)
The delay removes some immediate pressure — but not the obligation. A conformity assessment takes time. Building documentation takes time. Updating contracts with providers takes time. And it all begins with knowing which systems you use.
What If You Use Your Software Provider’s AI?
You are a “deployer”. The law distinguishes between “provider” (the builder of the AI system) and “deployer” (the organisation that uses it in a professional context). As a deployer you have your own obligations — even if you are simply licensing the software.
You must verify that your provider complies with documentation requirements. You must deploy the system according to its instructions. And you must ensure the legally required human oversight is in place.
“But my provider says it’s fine” is not sufficient. You are expected to be able to demonstrate compliance yourself.
Three Steps You Can Take Now
The full compliance path for high-risk AI is long. But there are three steps every business owner can take now, regardless of sector or company size:
1. Inventory
Which AI tools do you use? For which tasks? Which decisions do they support or make? Build a simple list. This is the foundation for everything that follows.
2. Ask
Contact your software providers and ask two questions: does this system qualify as high-risk under the AI Act, and what documentation is available? The answer — or the absence of one — is already informative.
3. Document
Record what you use, when you started, for which application, and what information you have received from your provider. This is not only a requirement. It is your evidence if something goes wrong later.
You do not need to be fully compliant today. You do need to be able to show that you are actively and consciously working toward it.
In the next article we look at a concrete sector: what the AI Act means for those who use AI in recruitment and selection. Because the risks are abstract until you discover that your shortlisting software is legally a high-risk system.
The question is not exactly when the deadline falls. The question is whether you know what you are using.