AI Risk FAQ

How do you assess AI risk?

Quick answer

AI risk assessment involves four steps: identify all AI in use (including AI embedded in existing software platforms); classify each system against the EU AI Act’s risk tier framework; assess the specific risks in context (what harm could occur, how likely, how severe); and identify the controls required to manage those risks to an acceptable level. For high-risk AI systems under the EU AI Act, risk assessment is not optional — it is a prerequisite for deployment.

The four-step AI risk assessment process

Step one is identification: building a complete inventory of every AI system in use across the organisation. This includes standalone AI tools purchased specifically for the purpose (ChatGPT Enterprise, Copilot, etc.), AI features embedded in existing software platforms (AI in CRM systems, HR platforms, accounting software, email tools), and AI systems built or customised internally. Most organisations find the results of this step surprising: they are using substantially more AI than their leadership team is aware of.

Step two is classification: applying the EU AI Act’s risk tier framework to each identified system. Is the system prohibited? High-risk (used in employment, credit, healthcare, education, or another listed context)? Limited-risk (subject to transparency obligations)? Minimal-risk?

Step three is contextual risk assessment: for each system, what are the specific risks in this organisation’s context? What harm could occur, how likely is it, how severe would it be, and to whom?

Step four is control identification: what mitigations are required to manage each identified risk to an acceptable level?
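
The four steps can be sketched as a simple inventory model. This is an illustrative sketch only, not a compliance tool; the system names, tier assignments, and field choices are hypothetical examples, and a real assessment would rest on the EU AI Act's actual criteria.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class Risk:
    """A contextual risk: what harm, how likely, how severe, to whom."""
    harm: str
    likelihood: str
    severity: str
    affected: str

@dataclass
class AISystem:
    name: str
    source: str                                   # standalone tool, embedded feature, or internal build
    tier: RiskTier = RiskTier.MINIMAL
    risks: list[Risk] = field(default_factory=list)   # step three: contextual risks
    controls: list[str] = field(default_factory=list) # step four: required mitigations

# Step one: inventory. Embedded AI features count, not just standalone tools.
inventory = [
    AISystem("CV screening feature", source="embedded in HR platform"),
    AISystem("General-purpose chat assistant", source="standalone tool"),
]

# Step two: classification. Employment use is a listed high-risk context.
inventory[0].tier = RiskTier.HIGH
inventory[1].tier = RiskTier.LIMITED

# Step three: contextual risk assessment for the high-risk system.
inventory[0].risks.append(
    Risk(harm="unfair rejection of candidates", likelihood="medium",
         severity="high", affected="job applicants")
)

# Step four: control identification.
inventory[0].controls.append("human review of every automated rejection")

# High-risk systems cannot be deployed without a completed assessment.
high_risk = [s.name for s in inventory if s.tier is RiskTier.HIGH]
print(high_risk)
```

Printing `high_risk` surfaces the systems for which, under the EU AI Act, risk assessment is a prerequisite for deployment rather than an optional exercise.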

Common gaps in AI risk assessment

Several gaps recur in AI risk assessments conducted for Irish organisations. The most common is an incomplete inventory: organisations assess the AI tools they know about and miss the AI embedded in software they use for other purposes. The second is classification error: misclassifying a high-risk AI system as limited-risk because the AI feature is not the platform’s primary use, or because the feature is presented as a convenience rather than a decision tool. The third is context insensitivity: applying generic risk descriptions rather than assessing the specific risks in the organisation’s actual context. A generic hallucination risk description underestimates the specific harm for a law firm that is actually using AI to draft client-facing legal documents. The fourth is missing the EU AI Act implications: assessing operational risk without assessing regulatory compliance risk leaves the most consequential exposure unaddressed.

Acuity AI Advisory conducts AI risk assessments for Irish organisations — covering inventory, classification, contextual risk, and controls. See our AI governance services.