AI is now embedded in credit risk, fraud detection, and AML at most sizeable Irish financial institutions. Board oversight of these systems is a governance obligation, not a technical question to delegate permanently to management.
Artificial intelligence is not new in financial services risk functions. Statistical models have supported credit decisions for decades. What has changed is the complexity and opacity of the systems now being deployed, the breadth of their application, and the speed at which new use cases are being added. For boards, this creates a governance challenge that cannot be resolved by asking management whether the models work.
Where AI is being used
In credit risk, machine learning models now inform origination decisions, limit management, and early warning systems for deteriorating exposures. These models typically process a wider feature set than traditional scorecards and can identify patterns in payment behaviour that simpler models miss. They are also harder to explain, which creates tension with regulatory expectations around transparent and fair decision-making.
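By way of illustration, the sketch below shows one of the simpler checks a validation team might run on such a model: ranking the features a gradient boosting classifier relies on. The library (scikit-learn), the feature names, and the data are assumptions for illustration, not any firm's actual model.

```python
# Minimal sketch: inspecting which features drive a gradient boosting credit model.
# Feature names and data are hypothetical; a real exercise would use the firm's
# own model object and holdout data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["payment_delays_12m", "utilisation_ratio", "income_volatility",
            "account_tenure_months", "recent_credit_searches"]

# Synthetic stand-in for an origination dataset: five features, binary default flag.
X = rng.normal(size=(1000, len(features)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global importances are a coarse explainability check: they show which inputs
# the model leans on, which is one input into a transparency and fairness review.
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name:25s} {importance:.3f}")
```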
In fraud detection, AI systems are monitoring transaction patterns in real time, flagging anomalies for human review or triggering automated blocks. The operational benefit is real — faster detection, lower false negative rates. The governance question is who reviews the model's performance, how often, and what happens when it gets things wrong.
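As a rough illustration of the routing decision described above, the following sketch maps an anomaly score to an automated block, a human review queue, or no action. The thresholds and the scoring function are hypothetical placeholders, not a real fraud engine.

```python
# Illustrative sketch of the routing decision: an anomaly score either triggers
# an automated block, goes to human review, or passes.

def route_transaction(anomaly_score: float,
                      block_threshold: float = 0.95,
                      review_threshold: float = 0.80) -> str:
    """Return the action taken for a scored transaction."""
    if anomaly_score >= block_threshold:
        return "automated_block"      # highest-risk: blocked before settlement
    if anomaly_score >= review_threshold:
        return "human_review"         # queued for an analyst decision
    return "allow"

# The governance point: these thresholds are model-risk decisions. Someone owns
# them, and changing them changes both fraud losses and false positive volumes.
for score in (0.99, 0.87, 0.40):
    print(score, "->", route_transaction(score))
```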
In AML, AI is used for transaction monitoring and customer risk scoring. This is high-risk territory under the EU AI Act, and it is an area where model bias — if a particular customer profile is disproportionately flagged — can create regulatory, legal, and reputational exposure.
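One concrete form of the bias question is a flag-rate comparison across customer segments. The sketch below is a minimal, hypothetical version of that check; the segment labels, counts, and any acceptable disparity ratio are assumptions a firm would define for itself.

```python
# Hedged sketch of one bias check for an AML monitoring model: compare the rate
# at which different customer segments are flagged. Data is invented.
from collections import Counter

flags = [  # (customer_segment, was_flagged)
    ("segment_a", True), ("segment_a", False), ("segment_a", False),
    ("segment_b", True), ("segment_b", True), ("segment_b", False),
]

totals = Counter(seg for seg, _ in flags)
flagged = Counter(seg for seg, hit in flags if hit)
rates = {seg: flagged[seg] / totals[seg] for seg in totals}

# A large ratio between the most- and least-flagged segments is a prompt for
# investigation, not proof of bias on its own.
disparity = max(rates.values()) / min(rates.values())
print(rates, "disparity ratio:", round(disparity, 2))
```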
What the board should be asking
Board members do not need to understand the mathematics of a gradient boosting model. They need to ask the right questions and be confident that management's answers are substantive rather than reassuring.
The core questions for any AI system used in consequential risk decisions are: What is the system deciding or recommending, and what are the consequences of a wrong output? Who owns the model, and who is responsible for its ongoing performance? How often is it validated, by whom, and what does that validation involve? What human oversight exists in practice — not in policy, in practice? What would we know if this model started performing badly, and how quickly would we know it?
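The last of those questions has a concrete technical counterpart: a monitoring check that compares current performance against the validated baseline and escalates when it degrades. The sketch below is illustrative only; the metric, tolerances, and escalation routes are assumptions each firm sets for itself.

```python
# Sketch of an automated health check: compare a model's recent performance
# against its last validated baseline and raise an alert on degradation.

BASELINE_AUC = 0.82          # recorded at last validation (hypothetical)
TOLERANCE = 0.03             # degradation the firm has decided it will accept

def check_model_health(recent_auc: float) -> str:
    """Classify current performance relative to the validated baseline."""
    drop = BASELINE_AUC - recent_auc
    if drop > TOLERANCE:
        return "ALERT: escalate to model owner and risk committee"
    if drop > TOLERANCE / 2:
        return "WATCH: schedule out-of-cycle validation"
    return "OK"

print(check_model_health(0.81))   # OK
print(check_model_health(0.77))   # ALERT
```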
For fraud and AML systems specifically: what is the false positive rate, and what is the process for customers who are incorrectly flagged? The answer to this question tells you a great deal about how seriously the firm has thought about the consumer protection implications of its AI systems.
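The metric itself is simple arithmetic, as the hypothetical worked example below shows; the counts are invented, and the point is the customer volume that even a small-looking rate can represent.

```python
# Worked illustration of the false positive rate: of all legitimate
# transactions, what share did the system wrongly flag? Counts are invented.
true_positives = 180      # genuine fraud, correctly flagged
false_positives = 2_400   # legitimate activity, wrongly flagged
true_negatives = 497_600  # legitimate activity, correctly passed
false_negatives = 20      # genuine fraud, missed

false_positive_rate = false_positives / (false_positives + true_negatives)
print(f"False positive rate: {false_positive_rate:.3%}")   # ~0.480%

# Even a sub-1% rate here is 2,400 customers whose transactions were wrongly
# flagged, which is why the remediation process matters as much as the rate.
```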
Liability exposure
Boards of regulated firms carry personal accountability for governance failures. Where AI systems are making or informing decisions that harm customers — through discriminatory credit refusal, incorrect fraud flags, or erroneous AML actions — the board cannot claim it delegated the problem to the CRO. The obligation to oversee management's execution of risk functions extends to AI-driven risk functions.
This is not a theoretical risk. The Central Bank has made clear through its supervisory communications that governance failures, including failures of board oversight, carry personal consequences for senior individuals.
What good board-level AI governance looks like
Good governance at board level is not about technical fluency. It is about establishing the right accountability structure and asking questions that require management to demonstrate, not just assert, that the systems work as intended.
Boards of financial services firms should receive periodic reporting on AI systems used in consequential decisions: model performance metrics, validation results, incident rates, and material changes. The risk committee is the natural home for this oversight, but the full board needs to understand the governance structure even if it delegates detailed monitoring.
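As an illustration of what that reporting could capture in structured form, the sketch below sets out one possible record per model; the field names and values are assumptions for illustration, not a regulatory or Central Bank template.

```python
# Sketch of the fields a periodic board report on an AI system might carry,
# mirroring the reporting items listed above. All values are placeholders.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BoardAIReport:
    model_name: str
    business_use: str                  # e.g. "credit origination", "AML monitoring"
    reporting_period_end: date
    performance_metrics: dict          # e.g. {"auc": 0.81, "false_positive_rate": 0.0048}
    last_validation_date: date
    validation_outcome: str            # e.g. "fit for purpose with conditions"
    incidents_in_period: int
    material_changes: list = field(default_factory=list)

report = BoardAIReport(
    model_name="retail_credit_scorecard_v4",
    business_use="credit origination",
    reporting_period_end=date(2024, 12, 31),
    performance_metrics={"auc": 0.81, "false_positive_rate": 0.0048},
    last_validation_date=date(2024, 9, 30),
    validation_outcome="fit for purpose with conditions",
    incidents_in_period=1,
    material_changes=["retrained on 2024 H1 data"],
)
print(report.model_name, report.validation_outcome)
```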
Independent challenge, whether through a board AI advisory function or a skilled NED with relevant experience, is increasingly how firms ensure that management presentations on AI receive genuine scrutiny rather than deference.
We support boards in building this oversight capability. See our board AI advisory service for how we work with financial services firms on these questions.