
Non-Executive Directors and AI: What Effective Oversight Looks Like in Practice


Ger Perdisatt

Founder, Acuity AI Advisory

Non-executive directors are increasingly expected to provide meaningful oversight of AI systems. Most have not been given the frameworks, the information, or the training to do so. Here is what effective NED AI oversight actually requires.

The obligation on non-executive directors to provide meaningful oversight of AI has arrived ahead of most boards' readiness to deliver it. The EU AI Act places obligations on operator organisations — and accountability for meeting those obligations runs to the board. Regulators across financial services, healthcare, and the public sector are beginning to ask questions about AI governance that boards are expected to be able to answer. And the reputational consequences of AI failures that boards did not anticipate are becoming visible at a frequency that concentrates minds.

The practical question for most non-executive directors is not whether AI oversight is a board responsibility. It clearly is. The question is what effective oversight actually looks like — and how to deliver it without technical expertise that most NEDs do not have and cannot reasonably be expected to acquire.

The oversight gap — and why it is widening

Non-executive directors face a specific challenge with AI that differs from most other risk and governance domains. Technology risk, cybersecurity, regulatory compliance — these can be overseen at a level of principle without deep domain knowledge, because the underlying concepts are familiar and the reporting structures are well-established.

AI is different in two ways. First, the technical complexity of AI systems makes it genuinely difficult for non-specialists to evaluate management assertions about their safety, accuracy, and regulatory compliance. Second, the pace of AI adoption in most organisations is outrunning the development of reporting structures that would give NEDs visibility of what is happening.

The result is that many boards are ratifying AI-related decisions — budget approvals, strategic plans, risk appetites — without the information they would need to assess whether those decisions are sound.

What NEDs are not expected to do

Before describing what effective AI oversight looks like, it is worth being clear about what it does not require.

Non-executive directors are not expected to understand how large language models work. They are not expected to code, evaluate AI tools, or conduct their own risk assessments. They are not expected to be the organisation's AI experts.

What they are expected to do is what they do in every other governance domain: ask good questions, hold management accountable for clear answers, and escalate when the answers are not satisfactory.

The challenge with AI is knowing which questions to ask.

The five questions every NED should be asking

1. What AI systems are we operating, and have they been risk-classified?

A board that cannot get a clear, comprehensive answer to this question does not have adequate AI governance. The EU AI Act requires operators to understand what they are deploying. If management cannot produce a complete AI inventory with risk classifications, that is a governance gap that requires board attention.

2. Do we have a process for approving new AI tools before they are deployed?

AI adoption happens quickly. Without a defined approval process, the organisation's AI footprint grows without governance oversight. The board should understand what process management has in place and whether it is being followed consistently.

3. Are any of our AI systems classified as high-risk under the EU AI Act? What obligations attach to them?

High-risk AI systems carry substantive compliance obligations. If the organisation operates any — and many do, without realising it — the board needs to understand what those obligations are and whether they are being met.

4. Have there been any AI-related incidents, errors, or complaints in the past reporting period?

Incident reporting for AI should follow the same structure as incident reporting for any other operational risk. The board should receive regular reports on AI incidents, however minor, to understand how AI systems are performing in practice.

5. Who is accountable for AI governance, and how is that accountability defined?

Accountability without definition is not accountability. The board should be able to name the individual responsible for AI governance in the organisation and should understand how that responsibility is structured and what reporting flows from it.

Structuring board AI reporting

One of the most practical interventions a board can make is to ensure that AI governance appears as a standing item — not a one-off briefing — in regular reporting.

This does not require lengthy technical reports. An effective AI governance update to a board covers: any new AI systems approved for deployment, any updates to the EU AI Act compliance position, any AI incidents, and any material changes to the AI risk landscape.

This structure keeps the board informed without consuming disproportionate time and creates a documented record of board oversight — which is increasingly important from a regulatory perspective.

The role of external expertise

For boards that are starting from scratch on AI oversight, independent external expertise can accelerate the development of both framework and capability. The value of external advice in this context is not technical knowledge transfer — it is the ability to conduct an honest diagnostic of the organisation's current position and to benchmark it against what regulators and governance standards require.

Boards are well-served by advisers who have no commercial interest in the tools the organisation might adopt. Vendor-led AI briefings are structured to advance a purchase decision. Independent governance advice is structured to advance sound decision-making.


Acuity AI Advisory works with boards and non-executive directors on AI governance frameworks, EU AI Act readiness, and board AI briefings. If your board is looking to develop its AI oversight capability, the most useful starting point is a structured assessment of where you currently stand.
