
Corporate Archaeology: Why Most Organisations Are Running on Inherited Systems


Ger Perdisatt

Founder, Acuity AI Advisory

The processes and systems most organisations rely on were designed for conditions that no longer exist. AI deployment onto those foundations doesn't transform them — it accelerates them. That is sometimes a problem.

Every organisation carries infrastructure it did not choose. Approval workflows designed when a different person was responsible for a function that no longer exists in the same form. Reporting structures built around a strategic priority that has since changed. Technology systems procured for requirements that have evolved, maintained because replacement is too disruptive, integrated with other systems in ways that make removal complicated.

This is normal. It is also underestimated as a barrier to AI adoption.

When we work with organisations on AI strategy, what we find, consistently, is that the presenting problem (AI isn't working, productivity hasn't improved, the project hasn't delivered) sits on top of a deeper structural one: the organisation is running on systems and processes designed for conditions that no longer apply. The AI doesn't know this. It operates on what is there, not on what was intended.

What corporate archaeology actually means

The term is deliberately chosen. In an archaeological excavation, you uncover layers of what has come before — not as artefacts to preserve, but as evidence of how past decisions have shaped the current landscape. Some layers are load-bearing. Others are simply accumulated sediment.

Most organisations have not done this work on themselves. They have added layers — technology, process, structure — without systematically questioning what is beneath. The result is an operational environment shaped as much by historical accident as by deliberate design.

The specific patterns we observe:

Approval chains that outlast their original rationale. A five-step approval process for a category of expenditure may have been appropriate when the organisation had a different risk profile, a different financial position, or a different trust relationship between levels of management. Those conditions may have changed. The process usually hasn't.

Meetings that substitute for absent governance. Where decision rights are unclear or where accountability is diffuse, meetings fill the gap. They persist not because they are the most effective mechanism for the decisions they host, but because nobody has redesigned the governance that would make them unnecessary.

Technology integrations that constrain future options. Systems procured at different points in the organisation's history are integrated in ways that made sense at the time and now constrain the options available for modernisation. Replacing any one system requires understanding its dependencies, which are frequently undocumented.

Reporting that reflects what was asked for, not what is useful. Management information frameworks are built to answer the questions that mattered when they were designed. They often fail to surface the information that would be most useful for current decisions, because nobody has reviewed the framework against current priorities.

Why this matters for AI deployment

AI operates on what the organisation has, not on what it intended to build. A machine learning model trained on historical data from a flawed approval process learns the flawed process. A language model deployed to assist with document retrieval retrieves documents from the document environment that exists — including outdated policies, superseded versions, and abandoned project artefacts.
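The first of these failure modes can be made concrete with a deliberately toy sketch. All names and data below are hypothetical: a naive model "trained" on historical approval decisions simply reproduces whatever rule generated those decisions, including an inherited rejection rule whose rationale no longer applies.

```python
# Toy illustration (hypothetical data): a model fitted to historical
# approval records learns the embedded rule, sound or not.
from collections import defaultdict

# Historical records: (department, amount, approved). The inherited flaw:
# requests from "legacy-ops" were always rejected under an old policy.
history = [
    ("engineering", 900, True),
    ("engineering", 4200, True),
    ("legacy-ops", 300, False),
    ("legacy-ops", 1100, False),
    ("marketing", 700, True),
    ("legacy-ops", 250, False),
]

# "Training": record approvals and totals per department.
counts = defaultdict(lambda: [0, 0])  # dept -> [approvals, total]
for dept, _, approved in history:
    counts[dept][0] += int(approved)
    counts[dept][1] += 1

def predict(dept):
    """Majority-vote prediction learned from the historical record."""
    approvals, total = counts[dept]
    return approvals / total >= 0.5

# The model faithfully reproduces the inherited rule: legacy-ops requests
# are rejected regardless of the merits of any individual request.
print(predict("engineering"))  # True
print(predict("legacy-ops"))   # False
```

The point is not the modelling technique; any learner, however sophisticated, faces the same constraint when the training data encodes the process as it was, not as it should be.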

More significantly: AI-assisted automation of an inherited process does not improve the process. It makes it faster. If the process is poorly designed, faster execution of a poorly designed process produces more errors at higher speed. This is not a hypothetical risk. It is the outcome of AI deployment into organisations that have not first examined what they are deploying AI on top of.

Surfacing the archaeology before deploying the technology

The practical implication is a diagnostic step before AI deployment — not a technology audit, but a process audit. What are the primary processes that AI is intended to improve? Where did those processes come from? Do they still reflect current organisational requirements, or have they accumulated in their current form through inertia rather than design?

This is not a lengthy exercise in most cases. For a defined deployment domain — say, procurement approval or document review — the archaeology can be surfaced in a focused workshop with the people who actually work in those processes. They know where the inherited constraints are. They are often waiting for someone to ask.

The organisations that do this before AI deployment make better deployment decisions — and avoid the specific failure mode of building sophisticated AI capability on top of a process that should have been redesigned rather than accelerated.

The AI strategy work we do with Irish organisations treats corporate archaeology as a standard diagnostic step. The findings consistently explain patterns that organisations have previously attributed to the wrong causes.


Contact Acuity AI Advisory to discuss a pre-deployment diagnostic that surfaces the inherited constraints most likely to affect your AI investment outcomes.
