Gartner predicts that 40% of enterprise applications will include AI agents by the end of 2026. The governance question — who is accountable when the agent acts? — has barely been asked in Irish boardrooms.
Let me be precise about what we are talking about, because the language around AI has become almost useless at this point.
Agentic AI is not a chatbot. It is not an assistant you prompt and review before anything happens. An agent is an AI system that acts autonomously. You give it a goal. It breaks that goal into steps. It takes actions — sending emails, querying databases, executing transactions, interacting with other systems — and reports back when done. The human sets the objective. The agent decides the steps, in real time, without waiting to be told what to do next.
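To make that concrete, here is a deliberately minimal sketch of the loop in Python. Nothing in it is a real framework: `plan_steps` stands in for a model-driven planner, and the tools are stubs for the integrations (email, databases, payments) that an actual deployment would wire in.

```python
# A minimal, illustrative agent loop. plan_steps and the tools are stubs
# standing in for a model-driven planner and real integrations.

def plan_steps(goal: str) -> list[dict]:
    # In a real agent this decomposition is model-driven; here it is fixed.
    return [
        {"tool": "query_db", "args": {"query": f"records for: {goal}"}},
        {"tool": "send_email", "args": {"to": "ops@example.com", "body": goal}},
    ]

def run_agent(goal: str, tools: dict) -> list[dict]:
    """The human sets the goal; the agent decides and executes the steps."""
    log = []
    for step in plan_steps(goal):                     # agent breaks the goal into steps
        result = tools[step["tool"]](**step["args"])  # acts with no per-step review
        log.append({"step": step, "result": result})
    return log                                        # human sees only the final report

# Stub tools so the sketch runs end to end.
tools = {
    "query_db": lambda query: f"rows matching '{query}'",
    "send_email": lambda to, body: f"sent to {to}",
}
print(run_agent("reconcile March invoices", tools))
```

The structural point is visible in the code: the human appears at the first line and the last, and nowhere in between.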
Gartner predicts that 40% of enterprise applications will include task-specific AI agents by the end of 2026, up from under 5% in 2025. That is not a slow adoption curve. That is a governance event that most Irish boardrooms are not prepared for.
Why this is a different accountability question
Static AI tools — a document summariser, a credit scoring model, a chatbot fielding customer queries — have a legible accountability chain. A human inputs something. The tool produces an output. A human reviews it and decides. You can audit that sequence. You can point to where the human was in the loop.
Agentic systems compress or remove the human review step by design. That is the point of them. An agent authorised to process invoices, book meetings, or respond to customer queries will make thousands of decisions without a human reviewing each one. That is what makes them productive. It is also what makes the accountability question genuinely harder.
When one of those decisions causes harm — charges the wrong account, sends the wrong communication to the wrong person, denies a customer service they were entitled to — the chain of accountability becomes difficult to reconstruct. Who approved the action? The agent. Who authorised the agent to act in that domain? The IT or operations team. Who approved that deployment? Management. Who oversaw management's AI deployment decisions? The board.
Which brings it straight back to the boardroom.
The EU AI Act intersection
This is not just a theoretical governance question. Automated decision-making systems that affect people — in HR processes, credit decisions, customer services — are within scope of the EU AI Act's high-risk classification framework. An agent deployed in any of those contexts requires conformity documentation, human oversight mechanisms, and incident reporting processes. These are legal obligations, not best practice recommendations.
The AI Office of Ireland becomes operational from August 2026 and will have the authority to audit AI deployments. Boards approving agentic AI initiatives without asking the governance questions are not just creating operational risk. They are approving regulatory exposure — often without realising it, because the decision framing they received from management did not flag the compliance dimension.
I have sat in enough boardrooms to know that this is not about directors being careless. It is about management teams not yet having the frameworks to present these decisions correctly.
The questions boards are not yet asking
These are the six questions I would want answered before any agentic AI deployment came before a board or a risk committee for approval.
One. What actions is this agent authorised to take, and what is it explicitly prohibited from doing? The scope must be defined in writing, not implied.
Two. What is the human review mechanism? At what decision threshold — value, sensitivity, exception rate — does a human override or review before the agent acts?
Three. Where is the audit trail? If the agent makes a decision that causes harm six months from now, can we reconstruct what it did, in what sequence, and on what basis?
Four. What happens when the agent acts on incorrect or incomplete data? Who is notified, within what timeframe, and what is the remediation path?
Five. Has this deployment been assessed against the EU AI Act's risk classification framework? If it touches HR, credit, or customer-facing decisions, it almost certainly triggers high-risk obligations.
Six. Who in this organisation is accountable for the agent's outputs? Not who owns the tool. Not who manages the vendor relationship. Who is accountable for what the agent actually does?
If management cannot answer all six of these questions clearly, the deployment is not ready for approval.
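One way to make the first two and the last two of those questions enforceable rather than aspirational is to write the mandate down in a form a system can check before the agent acts. The sketch below is illustrative only; every field, name, and threshold is a hypothetical stand-in, not any real product's schema.

```python
# An illustrative, machine-checkable mandate. All names and values here
# are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentMandate:
    # Question one: authorised and prohibited actions, defined in writing.
    allowed_actions: frozenset
    prohibited_actions: frozenset
    # Question two: the threshold above which a human reviews before action.
    max_unreviewed_value_eur: float
    # Question five: the deployment's EU AI Act risk classification.
    ai_act_risk_class: str
    # Question six: a named accountable person, not a tool owner or vendor.
    accountable_owner: str

def requires_human_review(mandate: AgentMandate, action: str, value_eur: float) -> bool:
    """Enforce the mandate before the agent acts, not after."""
    if action in mandate.prohibited_actions:
        raise PermissionError(f"{action} is outside the agent's written mandate")
    if action not in mandate.allowed_actions:
        return True  # anything the mandate does not name defaults to human review
    return value_eur > mandate.max_unreviewed_value_eur

mandate = AgentMandate(
    allowed_actions=frozenset({"process_invoice", "book_meeting"}),
    prohibited_actions=frozenset({"issue_refund"}),
    max_unreviewed_value_eur=5_000.0,
    ai_act_risk_class="high-risk",
    accountable_owner="Head of Operations",
)
print(requires_human_review(mandate, "process_invoice", 12_000.0))  # True: escalate
```

Questions three and four, the audit trail and the remediation path, are runtime concerns rather than mandate fields; a sketch of those appears with the governance requirements below.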
The compounding risk
The market is not stopping at single-agent deployments. The direction of travel is multi-agent systems: networks of agents where the output of one becomes the input of another, with no human review at the handoff points. An agent that drafts a communication passes it to an agent that personalises it, which passes it to an agent that sends it. Each step is reasonable in isolation. The chain, operating without human review, is something different.
Errors compound in multi-agent systems. The audit trail becomes non-linear — harder to reconstruct, harder to explain to a regulator, harder to use as evidence that you exercised appropriate oversight.
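To see why the trail stops being a straight line, consider a toy version of that draft, personalise, send chain. The three agents below are stubs, and the parent-linked audit records are one assumption about how such a pipeline might log itself, not a description of any real product.

```python
# Illustrative only: three stub agents chained with no human review at the
# handoffs, logging audit entries linked by parent id.

import uuid

audit_log = []

def record(agent, parent_id, output):
    """Append an audit entry linked to the entry that produced its input."""
    entry_id = str(uuid.uuid4())
    audit_log.append({"id": entry_id, "parent": parent_id,
                      "agent": agent, "output": output})
    return entry_id

def draft(brief):
    text = f"Draft: {brief}"
    return text, record("drafter", None, text)

def personalise(text, customer, parent):
    out = f"{text} (for {customer})"
    return out, record("personaliser", parent, out)

def send(text, parent):
    out = f"SENT: {text}"
    return out, record("sender", parent, out)

# Each hop acts on the previous hop's output with no human at the handoff,
# so an error introduced in the draft reaches the customer untouched.
text, i1 = draft("renewal notice")
text, i2 = personalise(text, "Customer A", parent=i1)
text, i3 = send(text, parent=i2)

# Reconstructing "why was this sent?" means walking parent links backwards
# across three agents' records, not reading one log top to bottom.
for entry in audit_log:
    print(entry["agent"], "<- parent:", entry["parent"])
```

With one agent the log is a line; with a network of agents it is a graph, and every extra handoff is another edge you may one day have to explain to a regulator.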
The governance norms boards establish for single-agent deployments in 2026 will be applied — by analogy, by regulators, and by their own management teams — to multi-agent systems in 2027. Getting the framework right now matters more than the individual deployment decisions.
What proportionate governance actually looks like
I want to be clear: this is not an argument against agentic AI. I have seen the productivity case firsthand, and it is real. Agents handling routine invoice processing, scheduling, and standard customer communications genuinely free skilled people for higher-value work. The efficiency gains are not hype.
The argument is that deployment without governance is not efficiency. It is risk accumulation that is invisible until it crystallises.
Proportionate governance for an agentic AI deployment does not require a Chief AI Officer or a new board committee. It requires four things: a clear mandate defining what the agent can and cannot do; a human oversight trigger specifying when a human reviews before action is taken; an audit mechanism that records the agent's decisions in a way that is retrievable and interpretable; and a board-level reporting line so that agent-related incidents reach the right people, quickly.
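For the third and fourth of those requirements, the audit mechanism and the reporting line, a sketch along these lines shows the shape; the file path, addresses, and severity scheme are all hypothetical.

```python
# Illustrative audit and escalation plumbing; every name here is a stand-in.

import datetime
import json

AUDIT_PATH = "agent_audit.jsonl"  # hypothetical append-only record

def audit(agent_id, action, basis, outcome):
    """Record each decision so it is retrievable and interpretable later:
    what the agent did, when, and on what basis."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "basis": basis,      # the inputs the decision relied on
        "outcome": outcome,
    }
    with open(AUDIT_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def report_incident(entry, severity):
    """The board-level reporting line: serious incidents reach the right
    people quickly, not only the tool owner or the vendor manager."""
    recipients = ["tool-owner@example.com"]
    if severity == "high":
        recipients.append("risk-committee@example.com")
    print(f"notify {recipients}: {entry['action']} -> {entry['outcome']}")

entry = audit("invoice-agent", "pay_invoice",
              basis={"invoice_id": "INV-014", "amount_eur": 12_000},
              outcome="paid to wrong account")
report_incident(entry, severity="high")
```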
None of that is technically complex. All of it requires a board that knows to ask the questions before sign-off rather than after an incident makes the questions unavoidable.
The organisations that deploy agentic AI well in 2026 will not be the ones that moved fastest. They will be the ones that built governance capable of satisfying a regulator who arrives with exactly these questions eighteen months later.