Board Governance
AI Risk Register for Boards
A board-level AI risk register is not an IT risk log. It identifies the AI systems the organisation operates or relies upon, classifies them under the EU AI Act's risk tiers, and maps the governance obligations that apply to each. The board uses it to exercise informed oversight, independently of management: to ask the right questions rather than to manage AI operationally.
What a board-level AI risk register is for
The distinction between board-level risk oversight and management-level risk management is fundamental to good governance — and it is consistently blurred in the AI context. Boards are not responsible for managing AI risk. They are responsible for ensuring that management manages it effectively, within a risk appetite the board has set, and with governance structures the board has approved.
An AI risk register at board level serves a specific purpose: it gives directors a structured, accurate picture of the organisation's AI exposure that is not dependent on management's own assessment of that exposure. It is the instrument through which the board exercises independent oversight — asking whether classifications are accurate, whether governance gaps are being closed, and whether the organisation's AI risk appetite is being respected in practice.
Most organisations do not have an AI risk register at all — they have AI mentions in general IT risk logs, which do not provide the classification detail or governance mapping that board-level oversight requires. The EU AI Act has changed the expectation: boards are now expected to be able to demonstrate substantive governance of AI risk, not just awareness of it.
Director liability provides the practical incentive. Where an AI system causes material harm — to customers, employees, or third parties — the accountability question reaches the board. Directors who can demonstrate they reviewed and approved a robust AI risk register, asked substantive questions of management, and exercised documented oversight are in a categorically different position from those who cannot.
EU AI Act risk tiers as the classification foundation
The EU AI Act's four-tier risk classification — unacceptable risk (prohibited), high-risk, limited risk, and minimal risk — provides the governance framework for an AI risk register. Each tier carries a defined set of obligations. The board-level register maps the organisation's AI systems to these tiers and documents whether the corresponding obligations are met.
For organisations in regulated sectors — financial services, healthcare, legal, public sector — the classification exercise will identify multiple high-risk applications. The register makes the governance obligations attached to those classifications visible at board level, rather than leaving them embedded in management-level technical documentation.
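The tier-to-obligation mapping described above can be sketched in code. This is an illustrative sketch only: the four tier names follow the EU AI Act's classification model, but the obligations listed are indicative examples, not a complete or authoritative statement of the Act's requirements.

```python
# Indicative mapping of EU AI Act risk tiers to example obligations.
# The obligation lists are illustrative, not a complete legal statement.
RISK_TIER_OBLIGATIONS = {
    "unacceptable": ["prohibited: system must not be placed on the market or used"],
    "high": [
        "risk management system",
        "technical documentation",
        "human oversight measures",
        "logging and record-keeping",
    ],
    "limited": ["transparency obligations (e.g. disclosing AI interaction)"],
    "minimal": ["no mandatory obligations; voluntary codes of conduct"],
}

def obligations_for(tier: str) -> list[str]:
    """Return the indicative obligations attached to a classified tier."""
    if tier not in RISK_TIER_OBLIGATIONS:
        raise ValueError(f"Unknown risk tier: {tier}")
    return RISK_TIER_OBLIGATIONS[tier]
```

A register built on this structure makes the board-level question mechanical: for every system classified "high", which of the attached obligations are currently met?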
What we deliver
- AI system inventory across the organisation
- EU AI Act risk tier classification for each system
- Governance obligations mapped to each risk classification
- Current compliance status assessment and gap identification
- Accountability assignment for each AI system
- Board reporting template for ongoing AI risk oversight
- Review and escalation schedule for maintaining the register
Why Acuity AI Advisory
Ger Perdisatt is a current NED at Dublin Airport Authority and Tailte Éireann. He builds and uses board governance instruments as part of those responsibilities, not as a theoretical exercise: the AI risk register framework is designed for boards, by someone who sits on boards.
Former Microsoft COO experience ensures the register reflects how AI systems actually work and what risks they actually carry — not what vendors say they carry. Acuity AI Advisory is vendor-neutral and fixed-fee.
Common questions
What should an AI risk register contain?
A board-level AI risk register should contain: an inventory of all AI systems the organisation operates, deploys, or materially relies upon; the EU AI Act risk classification for each system (unacceptable, high-risk, limited-risk, minimal-risk); the governance obligations that apply at each risk level; the current governance status (compliant, partially addressed, gap identified); the accountable individual for each system; and the board's risk appetite position. The register is a governance instrument, not a technical document — it is designed to give the board a structured view of AI risk that enables informed oversight.
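The fields listed above can be sketched as a simple data model. This is a hypothetical illustration of the register's structure, not a prescribed schema; the field names, enum values, and example entry are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

class GovernanceStatus(Enum):
    COMPLIANT = "compliant"
    PARTIAL = "partially addressed"
    GAP = "gap identified"

@dataclass
class RegisterEntry:
    """One AI system's row in a board-level risk register (illustrative)."""
    system_name: str
    risk_tier: RiskTier
    obligations: list[str]          # governance obligations at this tier
    status: GovernanceStatus        # current compliance position
    accountable_owner: str          # a named individual, not a team
    notes: str = ""

# Hypothetical example entry
entry = RegisterEntry(
    system_name="CV screening model",
    risk_tier=RiskTier.HIGH,
    obligations=["risk management system", "human oversight", "logging"],
    status=GovernanceStatus.PARTIAL,
    accountable_owner="Chief Risk Officer",
)
```

Structuring the register this way keeps it a governance instrument: each row answers what the system is, how risky it is classified to be, what is owed, where it stands, and who is accountable.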
Who is responsible for maintaining an AI risk register?
Maintaining the AI risk register operationally is a management responsibility — typically the Chief Risk Officer, General Counsel, or a designated AI governance owner. The board's responsibility is to ensure that a register exists, that it is accurate and current, and that it is reviewed at appropriate intervals. The board should receive a summary report from management at least annually, with material changes — new high-risk AI adoption, significant incidents, regulatory developments — escalated outside the regular cycle. The board does not maintain the register; it exercises oversight of the management function that does.
How does an AI risk register relate to EU AI Act compliance?
The EU AI Act requires organisations deploying high-risk AI systems to maintain documented governance including risk assessments, technical documentation, and oversight records. An AI risk register is the board-level governance instrument that provides the oversight framework within which these management-level requirements are met. The register itself is evidence of board engagement with AI risk — evidence that regulators and inspectors will expect to see. An organisation that can produce a board-approved AI risk register with current classifications and governance status is demonstrably further ahead on compliance than one that cannot.
Build an AI Risk Register for Your Board
Fixed-fee. Board-ready. EU AI Act classification built in.