AI Risk FAQ

What is shadow AI risk?

Quick answer

Shadow AI is the use of AI tools by employees without the organisation’s awareness, oversight, or governance. As AI tools proliferate — including AI features embedded in everyday software — shadow AI has become endemic. The risk is not just that employees are using unsanctioned tools: it is that client data, confidential information, and other sensitive business data may be entering AI systems without the organisation’s knowledge. Governing shadow AI requires an active inventory programme, not just a policy that prohibits unsanctioned AI use.

How shadow AI arises and why policy alone does not solve it

Shadow AI arises because AI tools are increasingly accessible, useful, and embedded in everyday workflows — and because an unapproved tool is often no harder to reach than an approved one. An employee who discovers that a free AI tool produces better results than the approved alternative will use the free tool, particularly if the policy prohibiting it is not clearly communicated or consistently enforced. AI features embedded in software the organisation already uses — AI writing tools in word processors, AI assistants in email clients, AI analysis tools in productivity software — create shadow AI without any deliberate decision: the employee is using the application they were given, which happens to have AI features the organisation has not reviewed. A policy that says “do not use unsanctioned AI tools” does not address the AI features in sanctioned software. And employees who are unsure whether a tool is sanctioned tend to use it quietly rather than seek clarification — because seeking clarification risks having the useful tool prohibited.

Building an AI inventory to surface shadow AI

Surfacing shadow AI requires an active programme, not a passive policy. The most effective approach combines three elements. First, a structured staff survey: asking employees directly what AI tools they use, how they use them, and what data they enter into those tools. This approach works best when it is framed as a governance exercise rather than a surveillance exercise — the goal is to understand AI use so that the organisation can support it appropriately, not to identify policy violations. Second, a review of software in use: checking whether the applications already deployed in the organisation have AI features that have not been reviewed. This is often the most surprising step — the volume of AI embedded in standard business software has grown rapidly and is not always prominently disclosed. Third, an ongoing approval and review process: a lightweight mechanism for employees to flag AI tools they would like to use, receive a rapid governance assessment, and get a clear answer — which removes the incentive to use tools quietly without approval.

Acuity AI Advisory’s governance engagements include AI inventory work to surface shadow AI and establish an ongoing oversight process. See our AI governance services.