AI Risk FAQ
What AI risks should boards know about?
Quick answer
Boards should understand five AI risks: EU AI Act compliance risk (non-compliance fines up to 7% of global turnover); decision accountability risk (who is liable when AI-influenced decisions cause harm); data exposure risk (client and sensitive data in AI tools); operational risk (AI errors affecting business outcomes); and reputational risk (AI failures becoming public). The board’s role is to set risk appetite, ensure management has identified and mitigated these risks, and receive regular reporting. Directors who are not aware of these risks cannot exercise meaningful oversight.
The five AI risks boards must understand
EU AI Act compliance risk is the most pressing in the near term: the August 2026 enforcement deadline means that organisations using or planning to use high-risk AI must have compliance measures in place within a short timeframe. Non-compliance can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious breaches.

Decision accountability risk addresses a fundamental question that AI creates: when an AI-influenced decision causes harm, such as an AI-assisted employment decision that discriminates or an AI-generated client communication that contains errors, who is legally liable? Under the EU AI Act, the answer is the deployer: the organisation using the AI. Boards must ensure that management has accountability structures in place, not just AI tools.

Data exposure risk is the most prevalent: organisations routinely enter client data, personal data, and confidential information into AI tools without adequate governance.

Operational risk is the risk that AI errors degrade business performance.

Reputational risk is the risk that AI failures become public and damage the organisation's standing.
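The "whichever is higher" rule for the most serious breaches can be illustrated with a short sketch. The function name and figures-as-inputs are illustrative only; the Act's actual penalty depends on the breach category and regulator discretion, and this shows just the upper bound.

```python
def max_eu_ai_act_fine(global_turnover_eur: float) -> float:
    """Upper bound on the fine for the most serious EU AI Act breaches:
    the higher of EUR 35 million or 7% of global annual turnover.
    Illustrative sketch only, not legal advice."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# A firm with EUR 1bn turnover: 7% is EUR 70m, which exceeds EUR 35m
print(max_eu_ai_act_fine(1_000_000_000))  # 70000000.0

# A firm with EUR 100m turnover: 7% is EUR 7m, so the EUR 35m floor applies
print(max_eu_ai_act_fine(100_000_000))  # 35000000.0
```

The point for boards is that the fixed €35 million floor means smaller organisations are not shielded by low turnover: the maximum exposure never falls below it.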
The board governance questions for each risk
For each of the five risks, the board has specific governance questions to put to management.

On compliance risk: have we assessed our EU AI Act obligations, and are we on track to meet the August 2026 deadline? What high-risk AI systems are we using or planning to use, and what is the compliance status of each?

On accountability risk: do we have a clear accountability structure for AI decisions, with a named AI lead and defined responsibilities?

On data exposure risk: do we have an AI use policy that addresses what data can and cannot enter AI tools, and is it being adhered to?

On operational risk: what monitoring is in place to detect AI performance degradation, and what is the incident response process?

On reputational risk: what is our response plan if an AI failure becomes public, and has it been tested?

A board that asks these questions regularly, and receives substantive management responses, is exercising meaningful AI governance oversight.
Acuity AI Advisory delivers board AI briefings that equip directors to ask the right governance questions and oversee AI risk effectively. See our board AI advisory services.