KPMG International and INSEAD launched their AI Board Governance Principles in April 2026. The framework is well-structured. But the firm publishing it has commercial relationships worth hundreds of millions of euros with the very technology vendors Irish boards are trying to govern. That structural reality matters.
In April 2026, KPMG International and the INSEAD Corporate Governance Centre published their AI Board Governance Principles — a ten-pillar framework designed to help boards exercise meaningful oversight of AI. This is a serious piece of work. INSEAD's academic rigour is genuine. KPMG's global reach means this framework will land on the desks of boards across Ireland and internationally within weeks. It deserves a careful read, not a reflexive dismissal.
I have read it carefully. I want to say what it gets right, because it gets a fair amount right. I also want to say what it structurally cannot deliver — because that gap is not a criticism of the framework's quality. It is a function of who published it.
What the framework gets right
The KPMG/INSEAD principles treat board AI governance as a substantive responsibility, not a checkbox exercise. That framing matters. Boards have been comfortable treating AI as a management and technology concern — something to receive briefings on, not something to actively govern. This framework pushes back on that comfort with some force, and it does so credibly because INSEAD's involvement gives it academic weight that a standalone consulting publication would not carry.
The ten-pillar structure — covering strategy, risk, ethics, data, oversight, accountability, transparency, human oversight, resilience, and continuous learning — covers real ground. As a structured checklist for the questions a board should be asking, it is comprehensive. I have sat in board meetings across several sectors. Most boards are not asking anything close to the full set of questions this framework surfaces. Starting from this document would represent genuine progress for the majority of Irish boards.
The timing is also right. The AI Office of Ireland becomes operational in August 2026. The Regulation of Artificial Intelligence Bill currently before the Oireachtas proposes penalties of up to 7% of worldwide annual turnover for serious violations. These are not distant risks to be managed in due course. They are imminent obligations that some Irish boards are not yet tracking at the level they need to. A framework that puts structured AI governance questions in front of boards in April 2026 lands at precisely the moment those questions need to be asked.
The structural problem
KPMG has commercial partnership agreements with Microsoft, Google, and other major AI platform vendors that collectively run to hundreds of millions of euros annually. In the Irish and UK markets specifically, KPMG is an active Microsoft partner — advising on, implementing, and earning revenue from Copilot and Azure deployments. This is not a secret arrangement. It is published commercial positioning. The same structural reality applies to EY, Deloitte, and PwC, each of which has analogous vendor relationships.
I am not making an integrity argument about the individuals involved. The KPMG partners who worked on this framework are experienced governance professionals. The issue is not their integrity. The issue is what structural incentives do to advice over time, and to frameworks that are ultimately vehicles for advisory relationships.
When a board asks its KPMG adviser — after reading the KPMG/INSEAD framework and feeling well-briefed — "should we deploy Microsoft Copilot across the organisation?" or "is our current AI vendor the right choice for our risk profile?", the adviser's answer is shaped, consciously or not, by a commercial relationship the board may not fully see. That is not a theoretical risk. It is a structural feature of how the Big 4 operate in the AI market.
The specific governance gap this creates
The KPMG/INSEAD framework is excellent on the questions boards should ask about AI in general. It is less useful — and structurally unable to be fully useful — on the questions boards should ask about specific vendors, which is precisely where the money is being spent.
An Irish board approving a multi-year Microsoft 365 Copilot deployment, a Salesforce AI implementation, or a ServiceNow agentic AI rollout needs advice from someone who has no revenue interest in any of those outcomes. The framework does not provide that. It cannot, given the commercial architecture of the firm publishing it.
This is not a gap the framework could fill even if it tried. You cannot resolve a structural conflict of interest with better disclosure language or more comprehensive pillar coverage. The conflict exists at the level of the firm's commercial model, not at the level of the document's content.
What this means for Irish boards in practice
Use the KPMG/INSEAD principles as a structured checklist for the questions your board should be asking. The ten-pillar framework is a genuinely useful starting point for a board AI governance agenda — read it for that purpose.
Do not rely on your adviser's consulting arm — whether Big 4 or technology-aligned consultancy — for vendor-selection or vendor-evaluation advice on the same platforms it implements and earns from. Those two engagements cannot sit with the same adviser. The conflict is not manageable through good intentions.
Ask any AI governance adviser, before you engage them: what commercial relationships do you have with AI vendors? What financial interest do you have in the outcome of your recommendation? If the answer is unclear, or qualified with reference to "independence within our firm's existing relationships," the advice will carry the same qualification.
The NED perspective
I sit on boards. I am a non-executive director at Dublin Airport Authority and Tailte Éireann. I understand what a board is actually capable of processing in the time available, and I understand the difference between a framework that arrives on the agenda and a board that is genuinely exercising oversight.
A ten-pillar framework is useful as a reference document. It is not, by itself, what a board needs. What a board needs is a clear picture of what AI systems the organisation is currently running; a frank assessment of which of those carry regulatory risk under the EU AI Act and the Regulation of Artificial Intelligence Bill; and an adviser who has no financial stake in the answer. That last requirement is not a nice-to-have. It is the governance condition that makes the first two trustworthy.
Boards are not well served by frameworks that create the appearance of rigour while leaving the most consequential question — who is advising us, and what are their incentives — unanswered.
The bottom line
The KPMG and INSEAD principles are a welcome contribution to a conversation that needed more rigour. Read them. Use the framework as the structured question-set it is designed to be. The ten pillars represent a genuine advance on how most boards are currently approaching AI oversight.
But be clear about what a framework published by a firm with significant AI vendor partnerships can and cannot deliver. For the question "what should your board ask about AI?" — the answer is largely in the document. For the question "which AI approach or vendor is right for your organisation?" — that answer needs to come from somewhere without a stake in the outcome.
That distinction is not a marketing point. It is the governance condition your board should be applying to every piece of AI advice it receives, including this one.