
15 Competent Authorities: How Ireland's Distributed AI Enforcement Will Work


Ger Perdisatt

Founder, Acuity AI Advisory

Ireland has designated 15 competent authorities to enforce the EU AI Act across sectors. Your existing regulator will oversee your AI compliance. Here is what that means for financial services, healthcare, legal, and public sector organisations.

Ireland's approach to EU AI Act enforcement is now clear. Rather than creating a single AI regulator, the Government has designated 15 National Competent Authorities under a distributed enforcement model. Each established regulator will supervise AI compliance within their existing sector — with the AI Office of Ireland providing coordination.

For Irish organisations, this has a specific practical implication: the regulator you already know will be the one asking about your AI governance.

How the distributed model works

The model works on a straightforward principle: the regulator that already understands your sector is best placed to assess how AI risk manifests within it.

The Central Bank of Ireland will supervise AI in financial services. Credit scoring, fraud detection, AML systems, and customer-facing AI tools in banking and insurance fall under its remit.

The Data Protection Commission continues to protect fundamental rights related to personal data, which means AI systems that process personal data remain within the DPC's oversight in parallel with whatever AI Act classification applies.

Coimisiún na Meán oversees AI in audiovisual media services. Other sector regulators cover healthcare, employment, education, and additional domains.

The AI Office of Ireland coordinates across all 15 authorities, handles cross-sectoral issues, and manages the centralised functions the Act requires — including the EU AI database and general-purpose AI model oversight.

What this means for your organisation

Financial services

If you are regulated by the Central Bank, AI compliance becomes part of your regulatory engagement. The Central Bank already has expertise in systemic risk, conduct risk, and consumer protection — AI governance maps naturally onto these existing frameworks. Firms using AI in credit scoring, fraud detection, or AML should expect AI governance to feature in supervisory interactions.

Legal sector

Law firms using AI for document review, due diligence, or client-facing services face dual accountability: professional obligations under the Law Society and regulatory obligations under the designated competent authority. The governance framework needs to address both.

Public sector and state bodies

State bodies using AI in public administration, social services, or justice-adjacent functions are deploying systems the EU AI Act explicitly classifies as high-risk. The competent authority for public sector AI will have specific powers and expectations.

Healthcare and life sciences

AI systems used in clinical settings, diagnostic support, or patient triage carry some of the highest-risk classifications under the Act. The existing regulatory relationship with health authorities extends to AI governance.

The practical implications

1. Start the conversation with your regulator now. If you are in a regulated sector, your regulator will be developing their approach to AI oversight. Early engagement is an opportunity to understand expectations and demonstrate good-faith preparation.

2. Your AI governance framework must align with existing regulatory requirements. AI governance does not exist in a vacuum. For financial services firms, it must integrate with existing risk management and conduct frameworks. For healthcare organisations, it must align with clinical governance. The framework needs to serve both the AI Act and your sector's existing regulatory architecture.

3. Cross-regulatory overlap is real. Many AI systems will fall under the oversight of multiple authorities. An AI system that processes personal data in a financial services context engages both the DPC and the Central Bank. Your governance framework must be designed for this overlap, with clear accountability lines and documentation that serves multiple regulatory audiences. A sketch of what that looks like in an inventory record follows below.
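To make the overlap concrete, here is a minimal sketch of an inventory record that maps one AI system to every authority with an oversight interest. The schema, field names, and authority list are illustrative assumptions, not a prescribed format; the point is that each system carries its full set of regulators, not just its sectoral one.

```python
# Hypothetical inventory record mapping one AI system to multiple
# competent authorities. All field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_class: str                      # e.g. "high-risk" under the AI Act
    processes_personal_data: bool
    competent_authorities: list[str] = field(default_factory=list)

    def oversight_bodies(self) -> list[str]:
        """Every regulator this system engages, adding the DPC whenever
        personal data is processed."""
        bodies = list(self.competent_authorities)
        if self.processes_personal_data and "Data Protection Commission" not in bodies:
            bodies.append("Data Protection Commission")
        return bodies

credit_scoring = AISystemRecord(
    name="retail-credit-scoring",
    purpose="Creditworthiness assessment for consumer lending",
    risk_class="high-risk",
    processes_personal_data=True,
    competent_authorities=["Central Bank of Ireland"],
)
print(credit_scoring.oversight_bodies())
# ['Central Bank of Ireland', 'Data Protection Commission']
```

The design choice worth noting is that the DPC is derived from a property of the system rather than entered by hand, so the record cannot silently drop a regulator when the data-processing profile changes.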

4. Documentation requirements are now testable. Competent authorities will have powers to access documentation. The AI inventory, risk classifications, human oversight mechanisms, and governance structures you have built are not just internal management tools — they are regulatory artefacts that must withstand external scrutiny.
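One way to treat documentation as testable is to run a completeness check over the inventory before a regulator does. The sketch below assumes a simple dictionary-per-system inventory and an illustrative artefact list; neither is the Act's formal documentation schema.

```python
# Hypothetical completeness check over the AI inventory, flagging
# records that could not withstand a documentation request.
# The required artefacts below are illustrative, not the Act's formal list.
REQUIRED_ARTEFACTS = [
    "risk_classification",
    "human_oversight_mechanism",
    "training_data_summary",
    "accountability_owner",
]

def missing_artefacts(record: dict) -> list[str]:
    """Return the documentation items absent or empty for one AI system."""
    return [a for a in REQUIRED_ARTEFACTS if not record.get(a)]

inventory = [
    {"name": "fraud-detection", "risk_classification": "high-risk",
     "human_oversight_mechanism": "analyst review queue",
     "training_data_summary": "", "accountability_owner": "Head of Risk"},
]

for record in inventory:
    gaps = missing_artefacts(record)
    if gaps:
        print(f"{record['name']}: missing {', '.join(gaps)}")
# fraud-detection: missing training_data_summary
```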

The enforcement timeline

Enforcement by these authorities runs in parallel with the broader EU AI Act implementation:

  • Now: Prohibited AI practices are already banned
  • August 2025: General-purpose AI (GPAI) model obligations apply
  • August 2026: AI Office and competent authorities become operational; enforcement powers apply
  • December 2027: High-risk system technical obligations apply (subject to Digital Omnibus adoption)

The gap between now and August 2026 is the preparation window. Organisations that use it to build governance foundations — and begin their regulatory dialogue — will be better positioned than those that wait.


If you need to understand which competent authority applies to your organisation and what governance structures you need in place, contact Acuity AI Advisory for a structured assessment.
