The EU AI Act's high-risk classification carries the heaviest compliance burden. Many Irish organisations are operating high-risk AI systems without realising it. Here is how to identify them and what the obligations are.
The EU AI Act creates four risk tiers for AI systems: prohibited, high-risk, limited risk, and minimal risk. The high-risk tier carries the most substantial compliance obligations — and it is the category that most organisations in regulated sectors need to focus on.
The important point, and the one that surprises many boards, is that high-risk does not mean experimental or advanced. It refers to AI systems used in specific contexts that the Act's drafters identified as carrying significant potential for harm to individuals' rights, safety, or access to essential services. Many organisations are already operating such systems.
What the Act classifies as high-risk
The Act identifies high-risk AI systems through two mechanisms: Annex I, which lists AI systems used in safety-critical products already governed by EU harmonisation legislation, and Annex III, which lists eight standalone application areas where high-risk classification applies regardless of the product category.
For most Irish organisations, Annex III is the relevant one. The eight areas are set out below; a simple machine-readable sketch of them follows the list.
Biometric identification and categorisation — AI systems that identify or categorise individuals based on biometric data. Remote identification and categorisation systems are in scope; one-to-one biometric verification whose only purpose is to confirm that a person is who they claim to be is excluded, so not every employee access or customer identity verification tool will qualify.
Critical infrastructure management — AI systems that manage or operate utilities, transport, or other critical infrastructure. Most Irish organisations are not affected here, though energy and utilities companies should review their systems.
Education and vocational training — AI systems that determine access to educational opportunities, evaluate performance, or influence progression decisions. AI-assisted assessment tools fall into this category.
Employment and workers' management — This is the area most likely to catch Irish organisations by surprise. AI systems used in recruitment, CV screening, performance evaluation, promotion decisions, task allocation, or the monitoring and evaluation of workers' performance and behaviour are classified as high-risk.
Access to essential private and public services — AI systems used in creditworthiness assessment, life and health insurance underwriting, and access to public services. Financial services firms with AI-assisted lending or credit scoring tools need to assess these carefully.
Law enforcement — Not directly relevant to most Irish commercial organisations.
Migration, asylum, and border control — Not directly relevant to most Irish commercial organisations.
Administration of justice and democratic processes — AI systems that assist judicial or administrative decision-making.
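For organisations that track their AI estate in code rather than in a spreadsheet, it can help to give these areas a stable, machine-readable form for inventory tagging. The sketch below is one illustrative way to do that in Python; the names are paraphrases of the Annex III headings, not legal text.

```python
from enum import Enum

class AnnexIIIArea(Enum):
    """Paraphrased Annex III application areas, for inventory tagging.

    Illustrative only -- the authoritative wording is the Act itself.
    """
    BIOMETRICS = "Biometric identification and categorisation"
    CRITICAL_INFRASTRUCTURE = "Critical infrastructure management"
    EDUCATION = "Education and vocational training"
    EMPLOYMENT = "Employment and workers' management"
    ESSENTIAL_SERVICES = "Access to essential private and public services"
    LAW_ENFORCEMENT = "Law enforcement"
    MIGRATION_BORDER = "Migration, asylum, and border control"
    JUSTICE_DEMOCRACY = "Administration of justice and democratic processes"
```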
The compliance obligations that attach to high-risk systems
If an AI system is classified as high-risk, the organisation deploying it (the deployer in the Act's terminology) faces a set of substantive obligations.
Risk management system. A risk management system must be established, implemented, and maintained throughout the AI system's lifecycle. For purchased tools this sits primarily with the provider, but it is a continuous obligation, not a one-time assessment.
Data governance. Training, validation, and test data must meet defined quality criteria. For purchased tools, this obligation largely falls on the provider, but deployers must obtain enough transparency from the provider to confirm that it is being met.
Technical documentation. The system must be documented to a defined standard. Again, purchased tools place this obligation primarily on the provider, but deployers need to obtain and retain that documentation.
Human oversight. High-risk AI systems must be designed and operated to allow effective human oversight, and deployers must assign that oversight to people with the competence, training, and authority to intervene. Running a high-risk system in fully automated mode, with nobody positioned to catch and correct its outputs, is not a defensible posture under the Act.
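What effective oversight looks like varies by system, but a common architectural pattern is to gate consequential or low-confidence outputs behind human review rather than acting on them automatically. The Python sketch below illustrates that pattern only; the threshold value and the route_decision and review_queue names are hypothetical, not anything the Act prescribes.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str        # e.g. "approve" or "reject"
    confidence: float   # the model's own confidence score, 0.0-1.0

def route_decision(decision: Decision, review_queue: list,
                   threshold: float = 0.85) -> str:
    """Hypothetical gating logic: only high-confidence approvals pass
    straight through; adverse or uncertain outputs wait for a human."""
    if decision.outcome == "approve" and decision.confidence >= threshold:
        return "auto"                      # acted on automatically
    review_queue.append(decision)          # held for human sign-off
    return "pending_human_review"

queue: list = []
print(route_decision(Decision("applicant-17", "reject", 0.93), queue))
# -> pending_human_review: adverse outcomes always see a human
```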
Accuracy, robustness, and cybersecurity. Operators must ensure that the systems they deploy meet defined performance standards and are resilient to interference.
Logging. Systems must automatically log their operation to an extent that allows post-hoc investigation of outputs, and deployers must retain the logs under their control (the Act sets a minimum of six months unless other applicable law provides otherwise). Deployers need to verify that purchased tools meet this requirement.
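For deployers of purchased tools, the practical question is whether the tool emits records detailed enough to reconstruct how a given output was produced, and whether those records are actually retained. As a rough illustration of the kind of audit record worth capturing, with an entirely hypothetical schema:

```python
import datetime
import json

def log_ai_output(system_name: str, input_ref: str, output: str,
                  model_version: str, path: str = "ai_audit.jsonl") -> None:
    """Append one audit record per AI output (hypothetical schema).

    The aim is post-hoc traceability: what went in, what came out,
    which model version produced it, and when."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system_name,
        "input_ref": input_ref,   # a reference, not raw personal data
        "output": output,
        "model_version": model_version,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_output("cv-screener", "application-0042", "shortlist", "v2.1")
```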
The deployer versus provider distinction
A common confusion in readiness discussions is the boundary between deployer obligations and provider obligations. The Act distinguishes between providers (those who develop an AI system or place it on the market) and deployers (those who use a system developed by someone else in their own business context); it reserves "operator" as an umbrella term covering both, which does little to reduce the confusion.
For most Irish organisations purchasing commercial AI tools, the provider is the software vendor. But deployer obligations do not disappear because the tool was purchased. Deployers must:
- Ensure they use the system in accordance with the provider's instructions
- Monitor performance in their specific operational context
- Report serious incidents to the relevant market surveillance authority
- Maintain an appropriate human oversight function
The practical implication is that procuring a commercially available AI tool does not mean procuring compliance. The governance obligations remain with the deploying organisation.
Starting with the inventory
None of this work can begin without a complete AI inventory. The first task for any Irish organisation that has not yet started EU AI Act readiness work is to identify every AI system in use, map it to an application area in Annex III, and assess whether high-risk classification applies.
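The tooling matters less than the structure: a spreadsheet works, provided every system is captured with its business use and its candidate Annex III mapping. As a minimal sketch of what one inventory row might hold, with hypothetical field names and an invented vendor:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of the AI inventory (illustrative schema only)."""
    name: str
    vendor: str
    business_use: str
    annex_iii_area: str | None   # None if no Annex III area applies
    high_risk: bool = False      # confirmed classification, not a guess
    notes: str = ""

inventory = [
    AISystemRecord("CV screening add-on", "ExampleVendor",
                   "shortlisting job applicants",
                   "Employment and workers' management"),
    AISystemRecord("Meeting transcription", "ExampleVendor",
                   "internal note-taking", None),
]

# First pass: anything mapped to an Annex III area but not yet
# classified needs a proper assessment, not just a flag.
for rec in inventory:
    if rec.annex_iii_area and not rec.high_risk:
        print(f"Classification assessment needed: {rec.name}")
```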
This is not a comfortable exercise. Most organisations find systems they did not know were in scope. But it is the necessary foundation for everything that follows.
Acuity AI Advisory provides EU AI Act readiness assessments that begin with inventory and risk classification. If your organisation has not yet completed this step, a diagnostic conversation is the right starting point.