
Agentic AI Is Here: Why Your Governance Framework Needs an Upgrade

Ger Perdisatt

Founder, Acuity AI Advisory

AI systems that do things — not just say things — are entering enterprise workflows. Agentic AI requires execution governance, not just data protection. Most governance frameworks are not built for this. Here is what needs to change.

There is a category distinction in AI that most governance frameworks have not caught up with. Until recently, enterprise AI was primarily generative — it produced text, summaries, analyses. It said things. Organisations governed it accordingly, focusing on data protection, accuracy, and acceptable use.

Agentic AI is different. Agentic AI systems do not just generate content. They take actions. They send emails, update databases, process transactions, schedule meetings, manage workflows, and interact with other systems — often without human intervention for each step.

This is not a future development. Agentic AI is in enterprise workflows now. Microsoft's Copilot Studio, Salesforce's Agentforce, and dozens of other platforms are deploying AI agents that act autonomously within defined parameters. Gartner forecasts that 40% of enterprise applications will integrate task-specific AI agents by end of 2026.

The governance implications are fundamental. And most Irish organisations have not addressed them.

Why agentic AI breaks existing governance

Traditional AI governance frameworks were designed for systems that inform human decisions. The human remains in the loop — reviewing AI-generated analysis, accepting or rejecting recommendations, maintaining control over actions.

Agentic AI removes the human from parts of that loop. The AI agent receives an instruction, plans a course of action, uses tools to execute it, and may chain multiple actions together before returning a result to a human. The governance challenge is not just what the agent says — it is what the agent does.

OWASP has published a Top 10 for Agentic Applications in 2026, identifying specific risks including:

  • Tool misuse and exploitation — agents using tools in unintended ways
  • Excessive permissions — agents with broader access than their task requires
  • Agent sprawl — multiple agents deployed across business units without central oversight
  • Unintended action chains — agents executing sequences of actions that produce harmful outcomes
  • Lack of observability — organisations unable to audit what agents did and why

These risks require a different governance model from the one most organisations have built.

Execution governance vs. data protection

The shift from generative to agentic AI requires a corresponding shift in governance focus:

| Generative AI Governance | Agentic AI Governance |
|---|---|
| What data does the AI access? | What actions can the AI take? |
| Is the output accurate? | Did the execution achieve the intended outcome? |
| Who reviews the AI's recommendations? | Who oversees the AI's actions in real time? |
| What happens if the output is wrong? | What happens if the action is irreversible? |
| Acceptable use policies | Execution boundaries and permissions |

This is not an either/or. Organisations need both. But most AI governance frameworks were designed for the left column. The right column requires new structures.

What governance for agentic AI looks like

1. Agent identity and access management. Every AI agent should be treated as a privileged application — with a clear identity, scoped permissions, and access controls that are reviewed regularly. An agent that can read customer data should not automatically be able to modify it. Permissions should follow the principle of least privilege.
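As a minimal sketch of what scoped agent identity can look like in code (the `AgentIdentity` class and the `customers:read` permission strings are illustrative, not drawn from any particular platform):

```python
from dataclasses import dataclass

# Illustrative sketch: every agent carries an explicit identity and an
# explicit, frozen set of granted permissions.
@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    permissions: frozenset

def authorise(agent: AgentIdentity, permission: str) -> bool:
    """Least privilege: an action is allowed only if explicitly granted."""
    return permission in agent.permissions

# An agent scoped to reading customer data cannot modify it.
crm_reader = AgentIdentity("crm-summary-agent", frozenset({"customers:read"}))

assert authorise(crm_reader, "customers:read")
assert not authorise(crm_reader, "customers:write")  # read does not imply write
```

The point of the sketch is the default: the agent can do nothing it has not been explicitly granted, and a permission review is a review of one small, legible set.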

2. Execution boundaries. Define what each agent is authorised to do — and explicitly what it is not. These boundaries should be technical (enforced through system controls) and procedural (documented in governance policies). An agent authorised to draft customer communications should not be authorised to send them without human approval.
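The technical half of that boundary can be as simple as a deny-by-default allowlist checked before any action runs. A hypothetical sketch (the agent and action names are invented for illustration):

```python
# Illustrative sketch: execution boundaries enforced in code, not just
# documented in policy. Anything not on the allowlist is refused.
ALLOWED_ACTIONS = {
    "comms-agent": {"draft_email"},  # drafting is authorised; sending is not
}

class BoundaryViolation(Exception):
    """Raised when an agent attempts an action outside its boundary."""

def execute(agent_id: str, action: str, payload: dict) -> str:
    if action not in ALLOWED_ACTIONS.get(agent_id, set()):
        raise BoundaryViolation(f"{agent_id} is not authorised to {action}")
    return f"{action} completed"

execute("comms-agent", "draft_email", {"to": "customer"})  # permitted
```

Calling `execute("comms-agent", "send_email", ...)` raises `BoundaryViolation` rather than silently proceeding, which is the behaviour the procedural policy should mirror.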

3. Observability and audit trails. Every action an agent takes must be logged and auditable. If an agent updated a customer record, sent an email, or modified a workflow, there must be a record of what happened, when, why, and what triggered the action. Without observability, governance is impossible.
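A minimal sketch of what such a record needs to capture, assuming an append-only store (modelled here as a process-local list purely for illustration):

```python
import datetime

AUDIT_LOG = []  # in practice: an append-only, tamper-evident store

def record_action(agent_id: str, action: str, trigger: str, reason: str) -> None:
    """Capture what happened, when, why, and what triggered it."""
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "trigger": trigger,
        "reason": reason,
    })

record_action(
    "crm-agent",
    action="update_customer_record",
    trigger="user_request:ticket-4821",
    reason="address change requested by customer",
)
```

Whatever the storage backend, the four fields above are the minimum needed to answer an auditor's or regulator's question after the fact.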

4. Human override mechanisms. Critical actions require human approval gates — points in the execution chain where a human must review and approve before the agent proceeds. The definition of "critical" varies by context, but financial transactions, customer communications, and data modifications are typical candidates.
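A gate of this kind can sit directly in the execution chain. A hypothetical sketch, with the set of critical actions chosen for illustration:

```python
from typing import Optional

# Illustrative: which actions stop at an approval gate is a governance
# decision; these three are typical candidates, not a standard.
CRITICAL_ACTIONS = {"send_email", "process_payment", "modify_record"}

def run_step(action: str, approved_by: Optional[str] = None) -> str:
    """Critical actions halt pending human approval; others proceed."""
    if action in CRITICAL_ACTIONS and approved_by is None:
        return f"PENDING_APPROVAL: {action}"
    return f"EXECUTED: {action}"

run_step("summarise_notes")                      # proceeds directly
run_step("send_email")                           # halts at the gate
run_step("send_email", approved_by="reviewer")   # proceeds with approval
```

The design choice worth noting: the gate records who approved, so the accountability question ("who authorised this action?") always has an answer in the audit trail.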

5. Agent lifecycle governance. Agents should have defined lifecycles — creation, deployment, monitoring, review, and retirement. The shadow AI problem is amplified with agents: an ungoverned chatbot is a data risk, but an ungoverned agent with system access is an operational risk.
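One way to make that lifecycle enforceable is a central registry with explicit states and allowed transitions. A minimal sketch (the state names and transitions are an illustration, not a standard):

```python
# Illustrative: an agent can only move between governed states; there is
# no path that leaves it running outside the registry's view.
LIFECYCLE = {
    "created":      {"deployed"},
    "deployed":     {"under_review", "retired"},
    "under_review": {"deployed", "retired"},
    "retired":      set(),  # terminal state
}

REGISTRY = {}  # agent_id -> current lifecycle state

def register(agent_id: str) -> None:
    REGISTRY[agent_id] = "created"

def transition(agent_id: str, new_state: str) -> None:
    current = REGISTRY[agent_id]
    if new_state not in LIFECYCLE[current]:
        raise ValueError(f"illegal transition {current} -> {new_state}")
    REGISTRY[agent_id] = new_state

register("invoice-agent")
transition("invoice-agent", "deployed")
```

An agent that never appears in the registry is, by definition, shadow AI; the registry makes the sprawl problem visible and countable.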

The EU AI Act dimension

The EU AI Act does not specifically address agentic AI as a category — it was drafted before agentic systems reached enterprise maturity. But the Act's high-risk framework applies based on the domain of use, not the type of AI architecture. An AI system that makes employment decisions, processes credit applications, or operates in healthcare is high-risk regardless of whether it is generative or agentic.

For Irish organisations subject to the Act, this means agentic AI systems used in high-risk domains require the same compliance infrastructure: risk assessment, human oversight, documentation, and regulatory readiness. The fact that the system is agentic — taking actions rather than just producing outputs — makes the human oversight requirement more demanding, not less.

The board conversation

If your board has had a conversation about AI governance, it was almost certainly about generative AI — the risks of ChatGPT, the accuracy of AI-generated content, the data protection implications. That conversation needs to expand.

Board AI oversight must now include:

  • What AI agents are deployed across the organisation?
  • What actions can they take?
  • What permissions do they have?
  • What oversight mechanisms are in place?
  • Who is accountable if an agent takes an action that causes harm?

These are not theoretical questions. They are operational governance questions that require operational answers.


If your AI governance framework was designed for generative AI and needs to be extended for agentic AI, contact Acuity AI Advisory for a structured assessment of your governance readiness.
