AI Risk FAQ
What is an AI risk register?
Quick answer
An AI risk register is a structured record of the AI risks an organisation faces — identifying each AI system in use, the risks it carries, the likelihood and impact of those risks materialising, and the controls in place to manage them. A board-level AI risk register is different from an IT system log: it classifies systems against the EU AI Act’s risk tiers, maps regulatory obligations, and enables meaningful oversight. Without a risk register, boards and senior leaders are governing AI without a factual basis.
What an AI risk register contains
An AI risk register contains several elements for each AI system in use:

- System identification: what the AI system is, who provides it, and what it does.
- EU AI Act classification: whether the system is prohibited, high-risk, limited-risk, or minimal-risk.
- Risk assessment: the specific risks the system carries in its specific context, such as hallucination, data exposure, bias, and accountability gaps.
- Likelihood and impact: how likely each risk is to materialise, and what the impact would be if it did.
- Controls: the mitigations in operation, such as human oversight protocols, data handling policies, and verification requirements.
- Residual risk: the risk that remains after controls.
- Owner: the individual accountable for monitoring and managing the system.

A register that answers these questions for every AI system in use gives the board and senior management a factual basis for oversight.
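To make the elements above concrete, here is a minimal sketch of one register entry as a data structure. All field names, the enum values, and the 1–5 scoring scale are illustrative assumptions, not a prescribed schema; an organisation would adapt these to its own risk methodology.

```python
from dataclasses import dataclass
from enum import Enum

class AIActTier(Enum):
    # Illustrative labels for the EU AI Act's four risk tiers
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"

@dataclass
class RegisterEntry:
    system_name: str           # what the AI system is and does
    provider: str              # who supplies it
    classification: AIActTier  # EU AI Act risk tier (assumed enum above)
    risks: list[str]           # context-specific risks, e.g. hallucination, bias
    likelihood: int            # 1 (rare) to 5 (almost certain); illustrative scale
    impact: int                # 1 (minor) to 5 (severe); illustrative scale
    controls: list[str]        # mitigations in operation
    residual_risk: str         # risk remaining after controls
    owner: str                 # named accountable individual

    def inherent_score(self) -> int:
        """Simple likelihood x impact score before controls are considered."""
        return self.likelihood * self.impact

# Hypothetical example entry
entry = RegisterEntry(
    system_name="Customer support chatbot",
    provider="Example Vendor Ltd",
    classification=AIActTier.LIMITED_RISK,
    risks=["hallucination", "data exposure"],
    likelihood=3,
    impact=4,
    controls=["human review of escalations", "data handling policy"],
    residual_risk="low",
    owner="Designated AI lead",
)
```

Even a sketch this simple forces the questions the register exists to answer: every field must be filled in, including a named owner, for every system in use.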
Who maintains it and how it is used
The AI risk register should be owned by a named individual — typically the CEO, a designated AI lead, or a risk function lead — and reviewed regularly. In practice, many Irish organisations have not yet built a formal AI risk register, which means their board and leadership have no comprehensive view of AI risk exposure. The process of building the register is itself valuable: it typically surfaces AI systems that leadership was unaware of, identifies unmanaged risks that were assumed to be controlled, and reveals EU AI Act classification issues that require remediation. Once built, the register should be a living document — updated when new AI systems are adopted, when existing systems change, or when new risks are identified. It should be reported to the board on a regular basis, as a standing item in AI governance reporting.
Acuity AI Advisory builds AI risk registers for Irish organisations as part of its governance framework engagements. See our AI governance services.