
AI Risk Assessment: A Plain-English Guide for Irish Organisations

Ger Perdisatt

Founder, Acuity AI Advisory

An AI risk assessment is now a regulatory expectation for many Irish organisations. This guide explains what it involves, what it produces, and how to approach it without becoming an AI expert.

The term "AI risk assessment" is increasingly appearing in regulatory guidance, board papers and compliance frameworks across Ireland. The EU AI Act makes structured risk assessment a requirement for organisations deploying high-risk AI. But for most Irish organisations — particularly those outside the technology sector — the concept remains abstract. What does an AI risk assessment actually involve?

What an AI risk assessment is

An AI risk assessment is a structured review of the AI systems in use across an organisation, designed to identify, classify and manage the risks they create. It is not a technical audit of how AI algorithms work. It is an operational assessment of what AI is doing in the organisation, what decisions it influences, and what could go wrong.

A well-structured AI risk assessment produces four things:

  1. An inventory of AI systems in use — including tools many organisations do not realise count as AI
  2. A risk classification for each system, aligned with the EU AI Act's tiered framework
  3. A gap analysis identifying where current governance, oversight and documentation fall short

  4. A remediation roadmap with prioritised actions
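For organisations that want to track these outputs systematically, the four deliverables can be captured in a single inventory record per system. The sketch below is one illustrative way to structure that record; the field names and example values are assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One row in the AI inventory: what the system is, what decisions
    it influences, its EU AI Act tier, and outstanding remediation."""
    name: str
    vendor: str
    business_use: str                  # the decisions the system influences
    risk_tier: RiskTier
    governance_gaps: list[str] = field(default_factory=list)       # gap analysis
    remediation_actions: list[str] = field(default_factory=list)   # roadmap items

# Hypothetical example entry -- vendor and gap text are illustrative
record = AISystemRecord(
    name="CV screening module",
    vendor="ExampleHR",
    business_use="shortlisting job applicants",
    risk_tier=RiskTier.HIGH,
    governance_gaps=["no human-oversight procedure documented"],
    remediation_actions=["define and document a human review step"],
)
```

A spreadsheet with the same columns works equally well; the point is that one record per system carries the inventory, classification, gap and remediation information together.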

What counts as AI for assessment purposes

The first challenge in any AI risk assessment is scope. The EU AI Act's definition of an AI system is broader than most organisations expect. It encompasses not just machine learning models but rule-based systems with sufficient adaptability, automated decision-making tools, and AI-assisted processes — including many tools embedded in standard business software.

For most Irish organisations, an honest AI inventory will include:

  • HR software with AI-assisted screening or performance tools
  • Customer-facing chatbots or virtual assistants
  • Microsoft Copilot or similar generative AI productivity tools
  • Credit scoring, fraud detection or AML tools (in financial services)
  • Document review or contract analysis tools (in legal and professional services)
  • Scheduling, planning or forecasting tools with predictive AI components

Many of these tools were not purchased as "AI" — they were purchased as software. They are AI for EU AI Act purposes regardless of how the vendor positioned them.

How risk is classified

The EU AI Act classifies AI systems into four risk tiers:

Unacceptable risk — prohibited uses, including social scoring systems, subliminal manipulation, and certain real-time biometric surveillance. No Irish organisation should be using these.

High-risk — listed in Annex III, including AI used in HR decisions, credit scoring, critical infrastructure, education, law enforcement, and administration of justice. High-risk systems carry the Act's most demanding compliance obligations.

Limited risk — AI systems with specific transparency obligations, including chatbots that must identify themselves as AI.

Minimal risk — the majority of AI tools in use, including most productivity AI, which carry no mandatory requirements under the Act (though good governance practice still applies).
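A first-pass screening of the four tiers can be expressed as a simple decision rule. The category lists below are illustrative summaries, not the Act's legal text, and borderline cases still need proper legal assessment.

```python
# Simplified examples of Annex III high-risk areas -- not the statutory list
ANNEX_III_AREAS = {
    "hr_decisions", "credit_scoring", "critical_infrastructure",
    "education", "law_enforcement", "administration_of_justice",
}
# Illustrative examples of prohibited and transparency-obligation uses
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
TRANSPARENCY_USES = {"chatbot", "virtual_assistant"}

def screen_tier(use_category: str) -> str:
    """Map a system's use category to its likely EU AI Act risk tier."""
    if use_category in PROHIBITED_USES:
        return "unacceptable"
    if use_category in ANNEX_III_AREAS:
        return "high"
    if use_category in TRANSPARENCY_USES:
        return "limited"
    return "minimal"
```

Applied to the earlier inventory examples, AI-assisted HR screening screens as high-risk, a customer chatbot as limited risk, and a forecasting tool as minimal risk.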

Classifying your AI systems accurately is the essential starting point for understanding your compliance position.

What the assessment involves in practice

A structured AI risk assessment for a typical Irish organisation involves:

Discovery — identifying all AI systems in use, including tools deployed without central IT involvement

Classification — assessing each system against the EU AI Act's risk tiers and Annex III list

Gap analysis — comparing current governance, documentation and oversight against the obligations each risk tier creates

Vendor review — understanding what compliance commitments vendors have made and what deployer obligations remain

Remediation planning — sequencing the actions needed to close compliance gaps, prioritised by risk level and regulatory deadline
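The remediation planning step above reduces to an ordering problem: actions for higher-risk systems come first, and within a tier, earlier regulatory deadlines come first. The sketch below illustrates that prioritisation; the action names and deadline dates are hypothetical examples, not the Act's actual compliance dates.

```python
from datetime import date

# Lower number = higher priority
TIER_PRIORITY = {"unacceptable": 0, "high": 1, "limited": 2, "minimal": 3}

# (action, risk tier, illustrative deadline)
actions = [
    ("review Copilot usage policy", "minimal", date(2027, 1, 1)),
    ("add AI disclosure to support chatbot", "limited", date(2026, 8, 2)),
    ("document human oversight for CV screening", "high", date(2026, 8, 2)),
]

# Sort by risk tier first, then by deadline within each tier
plan = sorted(actions, key=lambda a: (TIER_PRIORITY[a[1]], a[2]))
```

The resulting plan puts the high-risk HR action ahead of the chatbot transparency fix, with the minimal-risk policy review last.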

For most Irish organisations outside regulated sectors, this process can be completed in a focused engagement of two to four weeks.

Why doing this now matters

Ireland's AI Office becomes operational in August 2026. Organisations that have not completed a risk assessment by then will be conducting one in response to regulatory attention rather than ahead of it. The difference is significant — both in the time available to address gaps and in the reputational and legal exposure associated with being unprepared.

The organisations that navigate AI regulation well will be those that treated risk assessment as a governance discipline, not a compliance reaction.

Acuity AI Advisory provides structured AI risk assessments for Irish organisations across all sectors. Contact us to discuss where to start.
