Non-executive directors cannot outsource AI oversight to the IT function or take management's assurances at face value. These are the ten questions that separate adequate AI governance from a liability gap.
Most boards are behind on AI governance. Not dramatically behind: most have received at least one management presentation on AI strategy. But there is a significant gap between a board that has heard an AI briefing and a board that is exercising genuine oversight of AI risk.
The questions below are designed to close that gap. They are direct, specific, and intended to produce answers that are either reassuring or immediately actionable. A management team that cannot answer them is telling the board something important.
The ten questions
1. What AI systems are we currently operating, and where is the inventory? The starting point for any AI governance framework is a complete picture of what is in use. Not what was approved, not what IT knows about — what is actually running. Shadow AI adoption (employees using unapproved tools with company data) is widespread. The board should expect to see a maintained inventory, not a narrative assurance.
2. Has each system been risk-classified under the EU AI Act framework? The Act's four-tier framework — prohibited, high-risk, limited risk, minimal risk — applies to organisations as deployers, not just developers. Boards need to know whether that classification has been done and by whom. See our guidance on EU AI Act compliance.
3. Who owns AI governance in this organisation? Ownership must be named and documented. "The IT function" or "the CTO" is insufficient — AI governance touches legal, compliance, HR, operations, and board accountability. The board needs to see a governance structure with clear lines of accountability.
4. What is the process for approving a new AI tool before deployment? There should be a defined, documented process. If management cannot describe it clearly, there is not one, and new AI tools are being deployed at management discretion without structured risk assessment.
5. What data are our AI systems processing, and where does it go? This question surfaces two risks: data protection obligations (the GDPR applies in full to personal data processed by AI systems) and confidentiality exposure from data uploaded to third-party AI platforms. The board does not need technical detail. It needs assurance that someone has asked and answered this question for every system in use.
6. Are any of our AI applications safety-critical or high-risk under regulatory frameworks? High-risk AI systems under the EU AI Act — those used in employment decisions, credit assessment, critical infrastructure management, or safety monitoring — carry significant compliance obligations. The board should know if the organisation is operating any such systems and what the compliance status is.
7. What is the incident reporting process if an AI system produces a harmful or erroneous output? Errors will happen. The question is whether the organisation has a process for identifying them, escalating them, and reporting them where regulation requires. A board that hears about an AI-related incident through a regulator or media inquiry rather than through management has a governance failure on its hands.
8. Have we reviewed our professional indemnity and liability insurance in light of AI use? Policies drafted two or three years ago were not written with AI-assisted professional outputs in mind. Boards should expect confirmation that coverage has been reviewed and that any gaps have been addressed.
9. What training have staff received on the responsible use of AI tools? Deploying AI tools without training staff on their limitations, appropriate use, and the organisation's policies creates liability exposure. The board should see evidence of training, not just a policy document.
10. What will we be reporting to regulators, and by when? The EU AI Act and associated national enforcement mechanisms will require reporting and documentation from deployers of regulated AI systems. The board needs to understand what compliance deadlines apply and whether the organisation is on track.
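The inventory called for in question 1 and the classification in question 2 need not be elaborate. A minimal sketch of what one inventory entry might record, in Python; the field names, example systems, and risk tiers shown are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass
from datetime import date

# The four EU AI Act tiers referenced in question 2.
RISK_TIERS = ("prohibited", "high-risk", "limited", "minimal")

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (illustrative fields only)."""
    name: str             # e.g. "CV screening tool"
    vendor: str           # supplier, or "internal" for in-house builds
    owner: str            # a named accountable individual, not a department
    risk_tier: str        # one of RISK_TIERS (question 2)
    data_processed: str   # categories of data the system touches (question 5)
    approved: bool        # passed the pre-deployment process (question 4)?
    last_reviewed: date   # an inventory is maintained, not a one-off exercise

# Hypothetical entries, showing the shape of a board-pack summary.
inventory = [
    AISystemRecord("Contract review assistant", "ExampleVendor", "Head of Legal Ops",
                   "limited", "client documents", approved=True,
                   last_reviewed=date(2025, 1, 15)),
    AISystemRecord("CV screening tool", "ExampleVendor", "HR Director",
                   "high-risk", "candidate personal data", approved=False,
                   last_reviewed=date(2024, 9, 1)),
]

# The gaps the board should see surfaced, not buried:
unapproved = [s.name for s in inventory if not s.approved]
high_risk = [s.name for s in inventory if s.risk_tier == "high-risk"]
print(f"{len(inventory)} systems; unapproved: {unapproved}; high-risk: {high_risk}")
```

Even at this level of simplicity, the structure forces the answers the questions above demand: a named owner per system, a risk tier per system, and a review date that shows the inventory is alive.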
How to use this list
These are not trick questions, and a management team with a functioning AI governance framework should be able to answer all of them. The value of the list is in the gaps — the questions that produce hesitation, deflection, or a request to come back at the next meeting. Those gaps are where the board's attention belongs.
We advise boards and non-executive directors on AI governance frameworks, director obligations, and EU AI Act compliance. If you want to understand where your board stands, talk to us.