
Why Most AI Projects Fail Before They Start

Ger Perdisatt

Founder, Acuity AI Advisory

The failure mode in most AI projects is not technical. It is the absence of diagnosis.

Most AI projects do not fail during implementation. They fail in the week before implementation starts — in the moment when someone decides to adopt a tool before they have defined the problem it is meant to solve.

This is not a technology problem. It is a sequencing problem.

The tool-first trap

The AI tools market is structured to encourage tool-first thinking. Vendors demonstrate capabilities and present use cases. The implicit logic is: find a use case for this capability and you will benefit.

The problem is that this reverses the correct order of reasoning. The right sequence is:

  1. Identify a real problem with a measurable cost
  2. Diagnose why the problem exists
  3. Determine whether AI addresses the root cause
  4. Select the tool that matches the requirement

When organisations skip steps one and two, they acquire tools that address symptoms rather than causes. The tool works as advertised. The problem remains. The conclusion is usually that AI does not deliver — when the real conclusion is that the wrong problem was being solved.

What diagnostic thinking looks like in practice

We worked with an organisation that had spent significant resources on an AI-powered workflow automation tool. Six months after implementation, productivity had not improved. Usage was low. The tool was technically functional.

A diagnostic of the actual workflow revealed the following: the primary bottleneck was not in the tasks the tool automated. It was in the approval process upstream of those tasks — a process that involved three people in two jurisdictions and had no defined response time. The automation was making the wrong thing faster.

The resolution cost nothing. A revised approval protocol, agreed in two meetings, eliminated the bottleneck. Productivity improved materially within four weeks. The expensive tool became a secondary instrument in a workflow that now functioned correctly.

This pattern — automation applied to the wrong point in a process — is the most common failure mode we observe. It is almost always invisible before the diagnostic.

Why organisations skip the diagnostic

There are several reasons, all understandable:

Time pressure. The sense that competitors are moving fast creates pressure to act before thinking. In practice, the organisations that move thoughtfully tend to outperform those that move first.

Vendor influence. AI tool vendors have both the budget and the incentive to be present early in the decision-making process. A well-structured vendor presentation is a persuasive thing. It is also designed to lead toward a purchase, not toward a problem definition.

Optimism about technology. There is a reasonable prior belief that AI tools are powerful and will create value. That belief is correct in many cases. But general capability does not translate automatically into specific organisational benefit.

Absence of a diagnostic framework. Most organisations do not have a structured methodology for evaluating AI adoption. Without a framework, the diagnostic step gets compressed into a brief internal discussion and a technology evaluation.

The cost of the wrong AI project

The cost of a failed AI project is rarely the licence fee. It is the management time spent championing the tool, the team time spent on implementation, the organisational credibility spent on a project that does not deliver, and the delay to the actual solution.

In regulated organisations, there is also the compliance cost of deploying a system that was not properly assessed for regulatory risk before adoption.

What the diagnostic actually involves

A structured diagnostic for AI adoption is not a lengthy exercise. For most organisations, it involves:

  • Mapping the target workflow as it currently operates
  • Identifying where cost, time, or error is actually concentrated
  • Assessing whether the identified problem is amenable to AI intervention
  • Evaluating the specific AI capability against the specific requirement
  • Stress-testing the proposed solution against the regulatory and operational context

This takes days, not months. It is the work that prevents months of expensive remediation.

The Acuity AI position

We describe our methodology as diagnose before prescribe — not as a positioning statement, but as an accurate description of what we do. Every engagement begins with a diagnostic. The tool, if we recommend one at all, comes at the end.

This is not a slow approach. It is a fast approach to the right outcome.

SME strategy · AI governance