Ireland has designated 15 National Competent Authorities to enforce the EU AI Act through a distributed sectoral model. The regulator that will inspect your AI systems is almost certainly one you already know. Here is a practical guide to who regulates what — and what that means for your compliance approach.
One of the most practically important aspects of how Ireland has chosen to implement the EU AI Act is also one of the least discussed: enforcement is distributed across existing sectoral regulators, not centralised in a single AI authority.
The Regulation of Artificial Intelligence Bill 2026 designates 15 National Competent Authorities (NCAs) to supervise compliance. This means the organisation that will assess your AI systems is not a new AI regulator you have never dealt with — it is the body that already regulates your sector. Understanding which authority covers your organisation, and what powers they have, is foundational to a credible compliance approach.
The central coordinating body: AI Office of Ireland
The AI Office of Ireland is a new independent statutory body established as Ireland's Single Point of Contact for EU AI Act matters. It must be operational by 1 August 2026.
The AI Office is not the primary enforcement authority for most organisations. Its role is coordination — maintaining oversight of the broader system, handling cross-sectoral matters, and dealing with AI use cases that do not fall under any specific sectoral regulator. For the majority of Irish businesses, their practical compliance relationship will be with their existing sectoral authority, not the AI Office directly.
Think of the AI Office as the entity that sets the rules of the game and handles disputes about which referee applies — the sectoral authorities are the referees.
Who regulates what: the key designations
Central Bank of Ireland oversees AI compliance for regulated financial services: credit institutions, insurance companies, investment firms, and regulated fund managers. In practical terms, this covers AI systems used in credit scoring, loan decisioning, insurance pricing and underwriting, AML and fraud detection, and any customer-facing AI in regulated financial products. If you are a financial services firm, the CBI is your AI Act regulator — the same relationship, the same supervisory approach, the same documentation expectations.
Health Products Regulatory Authority (HPRA) oversees AI systems that fall within medical device regulation. This includes clinical decision support tools, diagnostic AI, and AI systems embedded in medical equipment. Healthcare providers and life sciences companies operating in Ireland should already have a relationship with the HPRA; that relationship now extends to AI system compliance.
Data Protection Commission (DPC) retains oversight over AI systems where GDPR obligations are engaged — which in practice means the vast majority of AI systems that process personal data. The EU AI Act and GDPR create parallel obligations for most AI deployments; the DPC is well-positioned to assess both simultaneously. For many Irish organisations, a DPC inquiry about an AI system may effectively double as an EU AI Act compliance assessment.
Competition and Consumer Protection Commission (CCPC) has oversight of AI used in consumer-facing contexts in sectors not covered by a specialist regulator. This includes AI in retail, e-commerce, consumer lending (above the regulated threshold), and customer service automation.
Media Commission covers AI systems used by broadcast and online media services regulated under the Online Safety and Media Regulation Act.
Department of Justice oversees AI use in law enforcement, migration decisions, and related state functions. This is primarily relevant for public sector bodies, but organisations providing AI systems used in these contexts are also in scope.
Workplace-related AI — AI used in HR, recruitment, performance management, and employee monitoring — sits at the intersection of several authorities' remits, including the Workplace Relations Commission (WRC) and the DPC, depending on the specific use case and whether the Act's high-risk AI system definitions are triggered.
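The triage logic described in the designations above can be sketched as a simple lookup. This is an illustration only: the sector labels, function name, and the flat sector-to-authority mapping are invented for this sketch, and the real designations are more nuanced — a single AI system can engage several authorities at once.

```python
# Illustrative first-pass triage of which Irish authority is likely to
# supervise an AI system. The sector keys are invented labels for this
# sketch; real designations under the Regulation of Artificial
# Intelligence Bill 2026 are more granular.
SECTOR_TO_AUTHORITY = {
    "financial_services": "Central Bank of Ireland",
    "medical_devices": "Health Products Regulatory Authority (HPRA)",
    "consumer_retail": "Competition and Consumer Protection Commission (CCPC)",
    "broadcast_media": "Media Commission",
    "law_enforcement": "Department of Justice",
}

def likely_authorities(sector: str, processes_personal_data: bool) -> list[str]:
    """Return a first-pass list of authorities likely to be engaged."""
    authorities = []
    sectoral = SECTOR_TO_AUTHORITY.get(sector)
    if sectoral:
        authorities.append(sectoral)
    # GDPR overlap: most AI systems that process personal data also
    # engage the Data Protection Commission in parallel.
    if processes_personal_data:
        authorities.append("Data Protection Commission (DPC)")
    if not authorities:
        # No sectoral match: cross-sectoral and unassigned use cases
        # route via the coordinating AI Office.
        authorities.append("AI Office of Ireland")
    return authorities

print(likely_authorities("financial_services", processes_personal_data=True))
```

The point of the sketch is the shape of the analysis, not the mapping itself: start from your sectoral regulator, then layer on the DPC wherever personal data is processed.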
The 15 competent authority structure in context
The full designation of 15 NCAs reflects the EU AI Act's approach to enforcement. Annex III of the Act lists specific high-risk AI system categories, each of which maps to supervisory domains that existing regulators already cover. Ireland's decision to build enforcement through existing sectoral expertise is deliberately aligned with this structure.
The practical implication for organisations: your compliance documentation, governance structures, and incident reporting processes should be designed with your specific regulator in mind. The CBI's expectations for AI governance documentation will reflect its existing supervisory culture — detailed, risk-based, board-accountable. The HPRA's approach will reflect its medical device oversight experience — focused on technical documentation, clinical validation, and post-market surveillance.
Knowing which regulator you are dealing with tells you something useful about what good compliance looks like in your context.
What enforcement powers these authorities have
All designated Market Surveillance Authorities under the AI Act have the same core powers:
Documentation access. Authorities can require economic operators — both providers (who build AI systems) and deployers (who use them in practice) — to produce technical documentation, conformity assessments, risk management records, and human oversight logs. This applies on demand, without requiring a formal investigation to be opened.
Unannounced inspections. Authorities can conduct on-site inspections without prior notice. In practice, this mirrors existing regulatory inspection powers in sectors like financial services and health and safety.
Corrective action. Where non-compliance is found, authorities can require specific corrective measures, set deadlines for remediation, and revisit to verify compliance.
Market restrictions. In serious cases, authorities can restrict or prohibit the use of an AI system within Ireland. For a business-critical AI system, this is the consequence that concentrates the mind.
Penalties. Maximum penalties are €35 million or 7% of worldwide annual turnover, whichever is higher, for prohibited AI practices; €15 million or 3% of turnover for other non-compliance; and €7.5 million or 1% for providing incorrect information to regulators.
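The penalty ceilings above follow a simple rule for undertakings: the fixed amount or the turnover percentage, whichever is higher. A minimal sketch (the tier names and function are invented for illustration; the figures are those quoted above):

```python
# Penalty ceilings quoted in the article, in euros: (fixed cap, % of
# worldwide annual turnover). For undertakings the applicable maximum
# is whichever of the two is HIGHER.
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_non_compliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_penalty(tier: str, worldwide_annual_turnover: float) -> float:
    """Maximum fine for an undertaking in a given penalty tier."""
    fixed_cap, turnover_pct = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_pct * worldwide_annual_turnover)

# A firm with €1bn turnover: 7% is €70m, which exceeds the €35m fixed cap.
print(max_penalty("prohibited_practices", 1_000_000_000))  # 70000000.0
```

For smaller firms the fixed cap dominates; for large groups the turnover percentage does, which is why the same headline figure can mean very different exposure for different organisations.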
What this means for your compliance approach
Three implications follow from the distributed enforcement model.
Your compliance documentation should be designed for your specific regulator. Generic EU AI Act documentation will satisfy a compliance checklist. Documentation designed around what your sector's regulator actually looks for in an inspection will provide genuine protection.
If you have an existing regulatory relationship, use it. Organisations that have historically engaged proactively with their sectoral regulator — maintaining open communication, responding promptly to requests, flagging emerging issues early — are better positioned than those with adversarial or purely reactive relationships. That dynamic will carry through to AI Act supervision.
Cross-regulator AI use is the most complex case. An AI system used in employee recruitment (engaging the DPC, and potentially the Workplace Relations Commission) at a financial services firm (CBI-regulated) may implicate multiple authorities. The AI Office of Ireland exists partly to handle these cases — but knowing that the complexity exists before an inspection raises it is a material advantage.
Understanding your enforcement authority is one of the first things to establish in any EU AI Act readiness programme. At Acuity AI Advisory, we build compliance approaches around your specific regulatory context — not generic frameworks that look the same regardless of sector. If you want a clear picture of where you stand and who will be asking the questions, that is a good place to start.