EU AI Act FAQ
What is a high-risk AI system under the EU AI Act?
Quick answer
High-risk AI systems are those used in specific sectors and use cases that pose a significant risk to people's health, safety, or fundamental rights. Under the EU AI Act, the high-risk categories include AI in employment and recruitment (CV screening, performance assessment, promotion decisions), AI in credit and insurance decisions, AI in healthcare (clinical decision support, AI in medical devices), AI in education, AI in law enforcement, AI in critical infrastructure, and AI in the administration of justice. High-risk status triggers mandatory requirements: conformity assessment, technical documentation, human oversight, and transparency obligations.
The full list of high-risk AI categories
The EU AI Act's Annex III lists eight categories of high-risk AI systems:

1. Biometric identification and categorisation systems.
2. AI in critical infrastructure (energy, water, transport).
3. AI in education and vocational training — systems that determine access to educational institutions or assess students.
4. AI in employment, workers' management, and access to self-employment — CV screening, performance monitoring, promotion and termination decisions.
5. AI in access to essential private and public services and benefits — including credit scoring, insurance risk assessment, and emergency services dispatch.
6. AI in law enforcement.
7. AI in migration, asylum, and border control.
8. AI in the administration of justice and democratic processes.

For Irish organisations, the categories most commonly relevant are employment AI and financial services AI — credit scoring, insurance underwriting, and AML/KYC systems that incorporate AI components can all fall within the high-risk categories, depending on their design and use.
What high-risk status requires of deployers
When an Irish organisation deploys a high-risk AI system, it takes on a specific set of obligations as a deployer under the EU AI Act. It must:

- Verify that the provider has completed the required conformity assessment and that the CE marking is in place.
- Implement the human oversight measures that the provider specifies in the system's instructions for use, and assign those measures to specific, trained, authorised individuals.
- Monitor the system's operation in accordance with the provider's instructions.
- Keep the logs the system generates, to the extent those logs are under its control.
- Report serious incidents to the provider and to the relevant market surveillance authority.
- Ensure that the individuals operating the system have received adequate training — fulfilling the Article 4 AI literacy obligation for this specific use case.

High-risk status is not just a provider obligation — deployers carry significant responsibilities of their own.
Acuity AI helps Irish organisations identify and manage high-risk AI obligations under the EU AI Act. See our EU AI Act compliance services.