The Central Bank of Ireland has not yet published a standalone AI governance framework. Some firms have interpreted that silence as latitude. It is not. The Central Bank's existing supervisory expectations — on operational resilience, model risk, conduct, and governance — apply to AI systems in full. The absence of AI-specific guidance does not mean the absence of regulatory expectations.
Understanding what the regulator expects requires reading three things together: the Central Bank's existing frameworks, the EU AI Act, and DORA.
What the Central Bank expects: the existing framework
The Central Bank's supervisory focus on AI is visible through several channels. Its fintech and innovation engagement, speeches on emerging risk from senior leadership, and its published consumer protection expectations all point in the same direction: AI systems used in regulated financial services must be explainable, auditable, and subject to human oversight. Where AI informs decisions that affect consumers (credit decisions, product recommendations, fraud flags), the firm must be able to explain those decisions and demonstrate that they are fair.
The revised Consumer Protection Code already creates obligations around automated decision-making that interact directly with AI deployment. If a customer is refused a product or service based partly on an AI output, the firm needs a process for explaining that outcome and providing a pathway to human review.
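To make that concrete, here is a minimal sketch of what a decision record supporting explanation and human review might look like. The structure and every field name are our own illustration, not a schema prescribed by the Code or the Central Bank.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: fields and structure are our own sketch, not a
# regulatory schema. Python 3.10+ syntax.
@dataclass
class AutomatedDecisionRecord:
    customer_id: str
    product: str                        # e.g. "personal_loan"
    outcome: str                        # e.g. "refused"
    model_version: str                  # which model produced the output
    key_factors: list[str]              # human-readable reasons for the outcome
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    human_review_requested: bool = False
    human_reviewer: str | None = None
    review_outcome: str | None = None   # "upheld" / "overturned" / None

    def explanation(self) -> str:
        """Plain-language explanation a firm could give the customer."""
        reasons = "; ".join(self.key_factors)
        return (f"Your {self.product} application was {self.outcome}. "
                f"The main factors were: {reasons}. "
                "You may request a review by a member of staff.")
```

The point of keeping the model version and key factors on the record itself is that explanation and redress remain possible after the model has been retrained or retired.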
The Central Bank's model risk guidance — developed in the context of credit models and stress testing — is also instructive. It requires validation, documentation, and governance of quantitative models. There is no principled reason why the regulator would apply different standards to AI models used in credit scoring, fraud detection, or AML.
DORA overlap
The Digital Operational Resilience Act (DORA) applies to Irish financial entities from 17 January 2025. Its ICT risk management requirements extend to AI systems; the technology is not exempt because it is novel. DORA requires firms to identify, classify, and manage ICT risks, including those arising from third-party technology providers. If your fraud detection or AML system is a third-party AI platform, that vendor relationship is a DORA-relevant third-party ICT dependency.
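DORA requires financial entities to maintain a register of information on their ICT third-party arrangements. A simplified sketch of what an entry for a vendor AI platform might capture follows; the fields are our own reduction for illustration, not the official register template.

```python
from dataclasses import dataclass

# Illustrative sketch of a register entry for a third-party AI platform.
# The fields are our own simplification, not the official DORA template.
@dataclass
class IctThirdPartyEntry:
    provider: str                       # vendor name
    service: str                        # e.g. "hosted fraud-detection scoring"
    supports_critical_function: bool    # underpins a critical or important function?
    substitutable: bool                 # could the firm switch providers readily?
    exit_plan_documented: bool
    last_resilience_review: str         # date of last vendor resilience assessment

def needs_enhanced_oversight(entry: IctThirdPartyEntry) -> bool:
    """Flag arrangements that warrant closer scrutiny: critical
    dependencies with no realistic substitute."""
    return entry.supports_critical_function and not entry.substitutable
```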
DORA and the EU AI Act create overlapping documentation requirements in some areas. Mapping those overlaps before building separate compliance processes is more efficient than discovering the duplication later.
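As a starting point, a mapping might look something like the sketch below. The groupings are our own reading of where the regimes touch, not an official concordance; confirm each pairing against the texts before relying on it.

```python
# Illustrative mapping of overlapping documentation areas between DORA
# and the EU AI Act. Our own reading, not an official concordance.
OVERLAP_MAP = {
    "risk assessment": {
        "dora": "ICT risk management framework (Art. 6)",
        "ai_act": "risk management system for high-risk AI (Art. 9)",
    },
    "logging": {
        "dora": "ICT incident detection and logging",
        "ai_act": "automatic record-keeping for high-risk AI (Art. 12)",
    },
    "third parties": {
        "dora": "register of ICT third-party arrangements (Art. 28)",
        "ai_act": "deployer obligations for procured systems (Art. 26)",
    },
}

def shared_artifacts() -> list[str]:
    """Areas where one evidence base could serve both regimes."""
    return [area for area, refs in OVERLAP_MAP.items()
            if "dora" in refs and "ai_act" in refs]
```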
High-risk classification under the EU AI Act
Several AI applications common in Irish financial services fall into the EU AI Act's high-risk category under Annex III. Creditworthiness assessment of natural persons is listed explicitly, as is risk assessment and pricing in life and health insurance. Fraud detection sits in more contested territory: Annex III expressly carves systems used to detect financial fraud out of the creditworthiness item, and AML monitoring is not named at all. But where such a system's output effectively determines a customer's access to services or triggers adverse action, firms should assess classification case by case rather than assume an exemption.
High-risk classification places the requirements for data governance, technical documentation, transparency, human oversight, accuracy, and robustness primarily on the provider. Deployers carry parallel obligations of their own, and a firm that substantially modifies a system or puts it into service under its own name can itself become a provider. For firms that have bought third-party AI platforms for these functions, the compliance obligation does not disappear; it shifts. The deployer must verify that the system is suitable for its intended use, operate it in line with the provider's instructions, maintain appropriate use documentation, and ensure human oversight is genuinely in place rather than nominal.
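One way to keep that verification honest is to track it as structured evidence per system rather than as a one-off memo. The checklist below is our own illustration of the kind of deployer-side evidence involved; the AI Act does not prescribe this structure.

```python
from dataclasses import dataclass

# Our own illustrative checklist of deployer-side evidence for a
# procured high-risk system; not a structure prescribed by the AI Act.
@dataclass
class DeployerEvidence:
    system_name: str
    provider_docs_on_file: bool      # provider's declaration and instructions for use
    used_per_instructions: bool      # deployment matches the intended purpose
    oversight_roles_assigned: bool   # named, trained humans with authority to intervene
    logs_retained: bool              # operation logs kept as required
    input_data_checked: bool         # input data relevant to the intended purpose

    def gaps(self) -> list[str]:
        """Names of checks not yet satisfied."""
        return [name for name, ok in vars(self).items()
                if isinstance(ok, bool) and not ok]
```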
What robust governance means in practice
Regulatory expectations in this area consolidate around a few consistent themes. First, accountability: there must be a named individual responsible for AI governance, with a clear mandate and reporting line. This is not optional in a regulated firm. Second, model risk: AI models used in consequential decisions require validation, version control, and regular performance monitoring. Third, human oversight: override processes must exist and must actually be used. A firm whose AI override rate is zero is not demonstrating human oversight; it is demonstrating a rubber stamp. Fourth, audit trail: the basis for AI-informed decisions must be logged and retrievable, both for regulatory inspection and for customer redress.
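The override-rate point can be monitored directly from the decision log. A minimal sketch follows; the record keys and the one per cent floor are our own assumptions for illustration, not a regulatory threshold.

```python
# Illustrative check on human-oversight effectiveness. The record keys
# and the 1% floor are assumptions for this sketch, not regulatory values.
def override_rate(decisions: list[dict]) -> float:
    """Fraction of AI-informed decisions where a human changed the outcome."""
    flagged = [d for d in decisions if d.get("ai_recommendation") is not None]
    if not flagged:
        return 0.0
    overridden = sum(1 for d in flagged
                     if d["final_outcome"] != d["ai_recommendation"])
    return overridden / len(flagged)

def oversight_looks_nominal(decisions: list[dict], floor: float = 0.01) -> bool:
    """A sustained override rate at or near zero suggests a rubber stamp."""
    return override_rate(decisions) < floor
```

A low rate is not conclusive on its own, but a rate of exactly zero over thousands of decisions is the kind of signal a supervisor will ask about.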
Firms that treat AI governance as a compliance exercise to be completed once and filed will find the regulator dissatisfied. The expectation is for living governance — systems, processes, and accountability structures that evolve as the AI applications evolve.
We work with regulated firms on AI governance frameworks that align with Central Bank expectations. Get in touch to discuss your current position.