
How to Set Up an AI Governance Committee: Structure, Roles and Terms of Reference

Ger Perdisatt

Founder, Acuity AI Advisory

An AI governance committee that is not properly structured will become a bureaucratic bottleneck or, worse, a governance theatre exercise. Here is how to set one up so that it actually works.

As AI adoption accelerates in Irish organisations, the AI governance committee has become an increasingly common institutional response. The idea is sound: bring together the right people, give them the mandate and information they need, and create an accountable mechanism for overseeing AI across the organisation.

The problem is execution. Most AI governance committees are established reactively — often in response to a regulatory requirement or an incident — and without the structural clarity that would make them functional. They end up with the wrong membership, unclear decision rights, insufficient information flows, and no clear relationship to the board.

This piece sets out how to design an AI governance committee that actually fulfils its purpose.

Start with the mandate

The most important structural decision is the committee's mandate. What is it there to do?

A functional AI governance committee needs a clear, bounded mandate that distinguishes it from: the board (which has ultimate accountability for AI governance), management (which has day-to-day operational responsibility), and IT or digital functions (which have technical implementation responsibility).

The committee's mandate should cover:

  • Oversight of the AI inventory — maintaining an accurate record of all AI systems in use, their risk classifications, and their compliance status
  • Approval of new AI systems — reviewing and approving new AI tools before deployment, particularly those with a high-risk classification
  • Compliance monitoring — tracking the organisation's compliance position under the EU AI Act and relevant sector regulation
  • Incident review — receiving reports of AI-related incidents and overseeing the response
  • Reporting to the board — providing regular structured updates to the board on the organisation's AI governance position

What the committee should not do: make operational decisions about which AI tools to procure (that is management's role), approve technology budgets (that is the CFO and board's role), or conduct technical assessments of AI systems (that is specialist work that should be commissioned rather than performed by the committee).
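To make the inventory-oversight item concrete, here is a minimal sketch of what one record in an AI inventory might look like, expressed as a Python data structure. The field names, the simplified EU AI Act risk tiers, and the flagging rule are illustrative assumptions, not a prescribed schema; the point is that each system carries an owner, a risk classification, and an approval status the committee can review.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Simplified EU AI Act risk categories (illustrative)."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One row in the committee's AI inventory (assumed fields)."""
    name: str
    owner: str            # accountable business line, not the IT function
    purpose: str
    risk_tier: RiskTier
    approved: bool        # committee approval status
    last_reviewed: str    # ISO date of the last compliance review

# Example: a recruitment-screening tool, high-risk under the AI Act
record = AISystemRecord(
    name="CV screening assistant",
    owner="Human Resources",
    purpose="Shortlisting job applicants",
    risk_tier=RiskTier.HIGH,
    approved=False,
    last_reviewed="2025-01-15",
)

# A simple flagging rule: high-risk systems without committee approval
# go on the agenda for the next meeting.
needs_committee_review = record.risk_tier == RiskTier.HIGH and not record.approved
print(needs_committee_review)  # True
```

Management would maintain records like this and present them to the committee; the committee reviews the register and the flags, it does not compile them.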

Membership

The composition of the committee should reflect the oversight mandate, not the implementation interest. The most common error is over-representing technology and under-representing governance, legal, and risk.

A functional membership for most Irish organisations will include:

Chair — Ideally a senior independent voice: a non-executive director, the CRO, or the General Counsel. Someone who can challenge management on AI governance without the conflict of interest that comes with owning the technology implementation.

Legal and compliance — Given the EU AI Act obligations and the GDPR implications of AI processing, legal and compliance representation is not optional. This role should have genuine authority to escalate concerns rather than simply noting them.

Risk — AI systems create risk across multiple dimensions: operational, regulatory, reputational, strategic. The CRO or Head of Risk should either be a committee member or have a defined mechanism for input.

Human resources — AI use in employment-related processes (recruitment, performance management, workforce planning) is one of the most significant high-risk exposure areas. HR should be represented, particularly if AI tools are used in these workflows.

Data protection — The Data Protection Officer should be a standing member or a standing invitee. AI processing and data protection obligations are inseparable.

Business line representation — Rotating representation from the business lines most actively using AI tools provides the committee with operational context and prevents it from becoming detached from practice.

Technology advisory — Technical expertise is necessary to evaluate AI systems, but it should be advisory rather than decision-making. The CTO or Head of IT should participate, but the committee's authority should rest with governance, risk, and legal voices.

Terms of reference

The committee needs documented terms of reference that cover its mandate, membership, meeting frequency, quorum requirements, decision-making process, and reporting lines.

On reporting lines, clarity is essential. The committee should report to the board — either directly or through the audit and risk committee — not to management. If the committee reports to the CEO or COO, its independence is compromised: it becomes an internal management review body rather than an accountability mechanism.

On decision-making: define clearly what the committee can decide and what it can only recommend. For high-risk AI systems, the committee might have authority to approve deployment — but the board should ratify the framework within which that approval operates, and the board should retain authority to override or escalate.

Information flows

A committee that does not receive adequate information will not be able to exercise effective oversight. The information flows into the committee need to be defined as clearly as the committee's mandate.

The committee should receive, on a regular basis:

  • An updated AI inventory with risk classifications and compliance status
  • A log of new AI tools proposed for deployment, with management's assessment and recommendation
  • A report on any AI incidents in the period, including near-misses
  • An update on the organisation's EU AI Act compliance position, particularly against the applicable phase-in deadlines
  • Any relevant regulatory developments or guidance

This information should be produced by management and reviewed by the committee — not produced by the committee itself.

Common failure modes

The most common reasons AI governance committees fail to function effectively are these:

Membership too technical. Technology-heavy committees spend their time on implementation questions and lack the governance and legal authority to challenge management on AI risk.

Insufficient frequency. AI adoption moves quickly. Committees that meet quarterly will always be reviewing decisions that have already been implemented.

No authority to say no. A committee that can only recommend, but cannot stop a high-risk AI deployment, is not exercising meaningful oversight. Some decisions need a hard stop pending committee approval.

Disconnected from the board. Without a clear reporting line to the board, the committee's findings do not reach the level of accountability where they have effect.

Operational rather than strategic focus. Committees that spend their time reviewing individual tools rather than the governance framework they operate within will never get ahead of the risk.


Acuity AI Advisory supports Irish organisations in designing and establishing AI governance committees, including mandate definition, terms of reference, and the reporting frameworks that make them functional. If your organisation is at the stage of setting up or restructuring its AI governance, we would welcome a conversation.
