Irish insurers are using AI in underwriting, claims, and fraud detection. Some of these applications are high-risk under the EU AI Act. Many firms are not yet governance-ready for what that means.
Insurance was an early adopter of predictive analytics, so the shift to AI has felt more like evolution than disruption for many firms. Underwriting models, claims scoring systems, and fraud detection have used statistical methods for years. What has changed is the sophistication of those models, the volume of data feeding them, and the regulatory scrutiny they now attract.
Where AI is being applied
In underwriting, AI models are being used to price risk at a more granular level — processing combinations of variables that traditional actuarial models would handle less precisely. Motor and home insurance pricing have been early application areas. Commercial lines and specialist risks are following. The benefit to the insurer is more accurate pricing; the risk is that the model encodes factors that are discriminatory, or that cannot be explained to the standard consumer protection obligations require.
In claims, AI is being used to triage incoming claims, assess documentation, flag anomalies, and in some cases make preliminary settlement recommendations. Automated claims processing can reduce handling time and cost significantly. It also introduces new quality control requirements — the insurer remains responsible for the outcome of a claim regardless of how the assessment was generated.
In fraud detection, AI monitors claim patterns to identify indicators of potential fraud. This is high-value — insurance fraud costs the Irish market hundreds of millions annually — but it is also the area of highest regulatory sensitivity. A fraud detection system that disproportionately flags certain customer profiles creates discrimination risk and consumer harm even if its overall detection performance is strong.
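One way to make "disproportionately flags" measurable is to compare flag rates across customer segments. The sketch below is illustrative only — the grouping variable, the parity threshold, and what counts as an acceptable disparity are all decisions for the firm's own fairness policy, not something this code decides:

```python
from collections import defaultdict

def flag_rate_by_group(claims):
    """Compute the fraud-flag rate for each customer segment.

    `claims` is an iterable of (group, flagged) pairs, where `group` is
    any segment label and `flagged` is True if the model flagged the claim.
    """
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for group, was_flagged in claims:
        totals[group] += 1
        if was_flagged:
            flagged[group] += 1
    return {g: flagged[g] / totals[g] for g in totals}

def max_disparity_ratio(rates):
    """Ratio of the highest to the lowest group flag rate (1.0 = parity)."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi / lo if lo > 0 else float("inf")
```

A monitoring process would run this on a rolling window of production decisions and escalate when the disparity ratio drifts beyond whatever bound the firm has justified and documented.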
High-risk classification under the EU AI Act
Several insurance AI applications fall into high-risk categories under the EU AI Act. AI used in life and health insurance risk assessment is explicitly listed under Annex III. AI that influences access to financial services products — including general insurance in certain contexts — also warrants careful analysis.
High-risk classification carries specific obligations: data governance documentation, technical robustness testing, transparency provisions, human oversight requirements, and registration in the EU database. For insurers operating at scale with multiple AI systems, understanding which applications trigger these obligations is a non-trivial exercise that requires input from both legal and technical teams.
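The classification exercise can be supported by a simple triage pass over the firm's system inventory. The sketch below is a hypothetical starting point, not legal advice: the use-case labels and the obligation checklist are illustrative shorthand, not the text of the Act, and any real mapping needs legal sign-off:

```python
# Illustrative checklist of the obligations named above; labels are
# shorthand, not statutory wording.
HIGH_RISK_OBLIGATIONS = [
    "data governance documentation",
    "technical robustness testing",
    "transparency provisions",
    "human oversight procedures",
    "EU database registration",
]

# Hypothetical set of use-case tags a firm might treat as presumptively
# high-risk pending legal review (e.g. Annex III listings).
PRESUMED_HIGH_RISK_USE_CASES = {
    "life_health_risk_assessment",
    "creditworthiness_assessment",
}

def triage(system):
    """Return the obligation checklist if a system looks high-risk, else [].

    `system` is a dict from the firm's inventory with a `use_case` tag.
    """
    if system.get("use_case") in PRESUMED_HIGH_RISK_USE_CASES:
        return list(HIGH_RISK_OBLIGATIONS)
    return []
```

The value of even a crude pass like this is that it forces the inventory to exist and gives legal and technical teams a shared list of systems to argue over.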
What the Central Bank expects
The Central Bank of Ireland supervises insurance firms under Solvency II, which includes requirements around model governance that already apply to actuarial and risk models. The extension of these expectations to AI is consistent with the regulator's general stance: new technology does not reduce existing governance obligations.
Central Bank supervision is increasingly examining how AI systems are governed at board and senior management level. Consumer protection obligations under the revised Consumer Protection Code require that automated decisions affecting customers be explainable and reviewable. Insurers whose claims or underwriting AI cannot meet this standard have a compliance problem, not just a technical one.
Governance gaps common in Irish insurance firms
In practice, the most common gaps we see in Irish insurance AI governance are:

- No model inventory covering AI systems: the actuarial model register and the AI governance register are often maintained separately, or the latter does not exist.
- Validation processes designed for traditional statistical models that have not been adapted for machine learning.
- Human oversight processes that exist on paper, but where the override rate suggests they are not functioning as intended.
- Third-party AI systems where governance documentation exists at the vendor, but the deploying insurer has not verified it or integrated it into its own framework.
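The override-rate point is worth making concrete. A minimal sketch, assuming the firm logs both the model's recommendation and the human reviewer's final decision per case (field names here are illustrative):

```python
def override_rate(decisions):
    """Share of automated recommendations changed by a human reviewer.

    `decisions` is an iterable of (model_outcome, final_outcome) pairs.
    A rate stuck near zero over a long period can indicate that "human
    oversight" is rubber-stamping the model rather than reviewing it.
    """
    decisions = list(decisions)
    if not decisions:
        return 0.0
    overridden = sum(1 for model, final in decisions if model != final)
    return overridden / len(decisions)
```

No single number proves oversight is working, but trending this metric per reviewer and per decision type gives a board something auditable to ask about.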
None of these gaps are insurmountable. Most can be addressed through a structured audit and remediation programme. The question is whether to do that work before the August 2026 EU AI Act deadline or after it.
We work with regulated firms on exactly this. Get in touch to discuss where your AI governance programme stands.