
AI Governance Frameworks for Irish Boards: What Good Looks Like


Ger Perdisatt

Founder, Acuity AI Advisory

A functioning AI governance framework is more than a policy document and a delegation to the IT function. Here is what adequate board-level AI governance actually requires in 2026.

Most Irish boards now accept that AI governance is something they need to be doing. Fewer have a clear picture of what that actually means in practice. The gap between "we have an AI policy" and "we have a functioning AI governance framework" is substantial, and in the current regulatory environment that gap carries real consequences.

This piece describes what good looks like, across the components that a board-level framework needs to address.

Oversight mechanisms

An AI governance framework starts with visibility. The board cannot oversee what it cannot see. This means the organisation needs a maintained AI system inventory — a structured register of every AI tool in use, its purpose, the data it processes, its risk classification, and who owns it. That inventory needs to be a living document, not a one-time exercise.

Board visibility also requires regular reporting. Management should provide the board with periodic updates on AI deployment activity: new tools approved or rejected, incidents or near-misses, audit findings, and regulatory developments that affect the organisation's compliance position. The frequency depends on the organisation's AI use profile, but quarterly reporting to the board is a reasonable minimum for an organisation with meaningful AI exposure.

Escalation triggers

A functioning framework defines in advance which categories of AI decision require board or committee-level approval rather than management discretion. The threshold for escalation should be calibrated to the organisation's risk appetite, but typically includes: deployment of any high-risk AI system as defined under the EU AI Act; AI use in any process where a harmful output could expose the organisation to material legal or regulatory liability; and any AI application that processes personal data of customers, employees, or other individuals at scale.

Without defined escalation triggers, decisions that should reach the board do not. Management exercising discretion on AI deployments in the absence of a clear framework is not necessarily acting in bad faith — it is filling a governance vacuum that the board has not addressed.

Management reporting requirements

The board should specify what it wants to hear from management, not wait to receive whatever management chooses to report. A minimum reporting specification for AI governance includes: a summary of AI system inventory changes since the last report; any incidents, errors, or regulatory correspondence involving AI systems; progress against the EU AI Act compliance roadmap; and any material AI-related expenditure or contractual commitments.

Reporting templates, however imperfect, are better than freeform narrative. They create accountability against consistent criteria and make it easier to track changes over time.

Director education

Effective AI governance requires board members who understand enough about AI to exercise informed oversight. This does not mean directors need to understand how large language models work. It means they need to understand the risk landscape well enough to ask the right questions and recognise an inadequate answer.

Regular director education on AI — not a one-time briefing but ongoing engagement as the technology and the regulatory environment evolve — is a component of adequate governance. Boards that have not had a structured AI education session in the past twelve months are likely working with an outdated model of AI risk.

What inadequate looks like

Inadequate AI governance has common patterns. An AI policy document that has not been reviewed since it was written. A delegation to "the IT function" that IT has interpreted as discretionary. No defined escalation triggers. Management presentations on AI strategy that focus on opportunity and say little about risk classification or compliance. No requirement for regular reporting to the board on AI activity.

These patterns are not the exception in Irish organisations. They are the norm. The question for boards is how quickly they move from that baseline to something that would withstand regulatory scrutiny.


We work with Irish boards to build AI governance frameworks that are practical, proportionate, and adequate to current regulatory requirements. Talk to us about where your board currently stands.
