Only 3% of Microsoft 365 users pay for Copilot. Trust scores are negative. 44% of lapsed users cite distrust as their reason for stopping. The problem is not the tool — it is how organisations deployed it.
The numbers on Microsoft 365 Copilot adoption in 2026 tell a story that every Irish organisation with Copilot licences needs to hear.
Out of approximately 450 million commercial Microsoft 365 users worldwide, only 15 million pay for Copilot. That is a 3% conversion rate. Given that Copilot has been Microsoft's flagship AI product, backed by substantial marketing investment and integrated directly into the platform most enterprises already use, 3% is not a slow adoption curve. It is a stall.
The trust data is worse. Copilot's accuracy Net Promoter Score — which measures whether users trust the tool's outputs — was -19.8 in January 2026. That is not low confidence. That is active distrust. When given a choice between Copilot, ChatGPT, and Gemini, 76% of users choose ChatGPT as their primary tool. Copilot gets 18%.
And the most damning number: 44% of lapsed Copilot users cite distrust of answers as their primary reason for stopping use.
Why this happened
The diagnosis is not complicated. In most organisations, Copilot was deployed as an IT project:
- Licences were purchased based on vendor projections of productivity gains
- Licences were allocated — often broadly, without targeting specific roles or workflows
- Training sessions were delivered — typically generic Microsoft-led sessions
- Adoption was left to individual initiative
- No success metrics were defined
- No workflow-level assessment was conducted
This approach skipped the critical step: diagnosing which specific workflows benefit from AI assistance and designing adoption around those workflows. Copilot accelerates whatever workflows it is applied to. If those workflows are inefficient, poorly documented, or the wrong workflows for AI assistance, Copilot accelerates the wrong things — or produces outputs that are unhelpful enough to erode trust.
The ROI problem is real
A 2025 Gartner survey found that only 6% of enterprises have successfully moved generative AI projects beyond the pilot phase and into production. For Copilot specifically, the challenge is justifying the $30 per user per month price point: it demands measurable productivity gains, and most organisations cannot demonstrate them.
The UK Government's own Copilot trial found that users saved an average of just 11 minutes per day — and many could not articulate where those savings came from. At $360 per user per year, saving 11 minutes daily is a marginal return that most CFOs would struggle to sign off on at scale.
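The arithmetic behind that judgement is worth making explicit. A minimal sketch, assuming roughly 220 working days a year and a hypothetical fully loaded labour cost — both figures are illustrative assumptions, not from the trial:

```python
# Illustrative break-even arithmetic for one Copilot licence.
# All inputs except the 11-minute figure are assumptions for this sketch.
LICENCE_COST_PER_YEAR = 360.0   # $30/user/month x 12
WORKING_DAYS = 220              # assumed working days per year
MINUTES_SAVED_PER_DAY = 11      # UK Government trial figure
LOADED_RATE_PER_HOUR = 50.0     # hypothetical fully loaded labour cost

nominal_hours_saved = MINUTES_SAVED_PER_DAY * WORKING_DAYS / 60
nominal_value = nominal_hours_saved * LOADED_RATE_PER_HOUR

# Break-even question: what fraction of those diffuse minutes must
# convert into realised output before the licence pays for itself?
required_realisation = LICENCE_COST_PER_YEAR / nominal_value

print(f"Nominal hours saved per year: {nominal_hours_saved:.1f}")
print(f"Nominal value at assumed rate: ${nominal_value:,.0f}")
print(f"Realisation needed to break even: {required_realisation:.0%}")
```

On paper the minutes look valuable. The CFO's problem is that diffuse minutes rarely convert into realised output one-for-one, and few organisations can evidence even the modest realisation rate the arithmetic requires.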
What the data actually tells us
The headline numbers paint a grim picture. But the underlying data is more nuanced. ROI is highly variable by role and use case:
- Senior analysts, researchers, and content professionals consistently show positive returns — because their work involves exactly the kind of synthesis, summarisation, and drafting that AI does well
- Developers using GitHub Copilot show strong adoption — because the tool integrates directly into their core workflow
- Broad organisational rollouts show neutral to negative returns — because most employees do not have workflows that benefit meaningfully from the tool as currently deployed
The problem is not that Copilot does not work. The problem is that it works for specific use cases, and most organisations deployed it broadly instead of targeting those use cases.
What to do about it
If your organisation has Copilot licences and is not seeing returns, the path forward is diagnostic, not more training.
1. Audit utilisation. Before anything else, understand who is actually using Copilot, how often, and for what. Most organisations discover that utilisation rates are below 40% — which means the majority of licence spend is waste.
2. Map workflows. Identify the specific workflows where Copilot is being used. Then assess whether those are the right workflows. The highest-value Copilot use cases are in meeting summarisation, email drafting, document synthesis, and data analysis — not in general browsing or casual Q&A.
3. Identify adoption barriers. The three main barriers are consistent: data governance concerns (employees worry about what data Copilot accesses), insufficient change management, and the absence of internal AI champions who can demonstrate effective workflows to colleagues.
4. Build a remediation plan. Reallocate licences from low-utilisation users to high-value use cases. Design workflow-specific adoption programmes. Set measurable success metrics. Review after 90 days.
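The audit and reallocation steps above can be sketched in a few lines. This assumes you can export per-user activity counts; the field names and the activity threshold here are hypothetical, though Microsoft's admin-centre usage reports expose similar per-user data:

```python
from dataclasses import dataclass

@dataclass
class LicenceRecord:
    user: str
    role: str
    sessions_last_30d: int  # Copilot interactions in the last 30 days

ACTIVE_THRESHOLD = 8  # assumption: ~2 sessions/week counts as "active"

def audit(records: list[LicenceRecord]) -> dict:
    """Split licences into active users and reallocation candidates."""
    active = [r for r in records if r.sessions_last_30d >= ACTIVE_THRESHOLD]
    idle = [r for r in records if r.sessions_last_30d < ACTIVE_THRESHOLD]
    return {
        "utilisation_rate": len(active) / len(records) if records else 0.0,
        "reallocate": [r.user for r in idle],
    }

# Illustrative data only — not real usage figures
licences = [
    LicenceRecord("analyst_1", "research", 42),
    LicenceRecord("dev_1", "engineering", 25),
    LicenceRecord("sales_1", "sales", 1),
    LicenceRecord("admin_1", "operations", 0),
]
report = audit(licences)
print(f"Utilisation: {report['utilisation_rate']:.0%}")
print(f"Candidates for reallocation: {report['reallocate']}")
```

The output of a sweep like this is the raw input for step 4: a concrete list of idle licences to reallocate towards the high-value roles identified in the workflow mapping.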
This is exactly what our Copilot adoption diagnostic covers. The diagnostic is not about selling more Microsoft products — it is about determining whether the products you have already bought are deployed effectively, and building a plan to fix what is not working.
The independence advantage
Most Copilot advice comes from Microsoft partners — organisations with a commercial relationship to the platform and a structural incentive to recommend more adoption, not less. Our diagnostic is vendor-neutral by design. If the diagnostic reveals that Copilot is the wrong tool for specific workflows, we say so.
If your organisation has invested in Copilot and is not seeing the returns, contact Acuity AI Advisory for a structured diagnostic. We find out what is actually happening, why, and what to do about it.