EU AI Act FAQ
How do you classify an AI system under the EU AI Act?
Quick answer
AI system classification under the EU AI Act follows a four-step process: first, determine whether the system involves a prohibited practice. Second, check whether the system falls into a high-risk category (Annex III lists eight categories). Third, check whether the system carries limited-risk transparency obligations (chatbots, deepfakes, emotion recognition). Fourth, if none of the above applies, the system is minimal-risk with no specific obligations. The classification depends on the system's purpose and the context in which it is deployed, not just its technical design.
The four-tier classification process step by step
Step one: check for prohibited practices. Does the system fall within any of the Article 5 prohibitions — social scoring, subliminal manipulation, real-time biometric surveillance in public spaces, emotion inference at work, or predictive policing based solely on profiling? If yes, the system cannot be used. If no, move to step two.

Step two: check for high-risk classification. Does the system fall within any of the eight Annex III high-risk categories — biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice? If yes, the full high-risk regime applies. If no, move to step three.

Step three: check for limited-risk transparency obligations. Is the system a chatbot, a deepfake generator, an emotion recognition system, or a system that generates synthetic content? If yes, specific transparency and labelling obligations apply. If no, move to step four.

Step four: the system is minimal-risk with no specific EU AI Act obligations, though voluntary codes of practice may apply.
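The sequential screen above can be sketched as a simple decision function. This is an illustrative sketch only: the category keywords below are hypothetical stand-ins, and the real legal tests in Article 5 and Annex III are far more nuanced than matching a deployment purpose against a list.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Article 5)"
    HIGH = "high-risk (Annex III)"
    LIMITED = "limited-risk (transparency obligations)"
    MINIMAL = "minimal-risk (no specific obligations)"

# Hypothetical shorthand labels for each tier's triggers; the statutory
# definitions are much broader and context-sensitive.
PROHIBITED_PRACTICES = {
    "social scoring", "subliminal manipulation",
    "real-time public biometric surveillance",
    "workplace emotion inference", "profiling-only predictive policing",
}
HIGH_RISK_AREAS = {
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration", "justice",
}
LIMITED_RISK_USES = {
    "chatbot", "deepfake generation", "emotion recognition",
    "synthetic content generation",
}

def classify(deployment_purpose: str) -> RiskTier:
    """Apply the four-tier screen in order:
    prohibited -> high-risk -> limited-risk -> minimal-risk."""
    if deployment_purpose in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if deployment_purpose in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if deployment_purpose in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note that the function takes the deployment purpose, not a description of the technology: the same underlying model classified as "employment" (CV screening) comes back high-risk, as "chatbot" (customer service) limited-risk, and as anything outside the listed categories (general drafting) minimal-risk — which mirrors why the screen must run in this fixed order, with each tier checked only if the stricter one does not apply.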
Common misclassifications and why purpose matters
The most important principle in EU AI Act classification is that it is purpose-driven and context-dependent, not technically determined. A general-purpose AI tool like a large language model is not inherently high-risk — but when deployed in a recruitment context to screen CVs, it becomes high-risk. When deployed for general administrative drafting, it is likely minimal-risk. The same underlying technology receives a different classification depending on its use.

Common misclassifications arise when organisations assess the technical nature of their AI tools rather than their operational deployment. An AI-powered HR analytics platform that the vendor markets as a decision-support tool is high-risk if it is used to make or influence employment decisions. An AI chatbot used for customer service is limited-risk, not minimal-risk, because transparency obligations apply to it. Getting classification right requires both technical understanding of the system and a clear-eyed assessment of how it is actually used in the organisation — which is why an AI inventory exercise is essential before any classification work.
Acuity AI helps Irish organisations classify their AI systems accurately under the EU AI Act framework. See our EU AI Act compliance services.