Boards are being asked to govern AI without being given the tools to do it. These ten questions give directors a framework for meaningful AI oversight — without needing to become technologists.
The EU AI Act places accountability for AI governance on the organisations that deploy AI systems — which means, ultimately, on boards. Directors cannot fulfil that accountability without the information needed to exercise it. These ten questions are designed to give Irish boards a practical framework for interrogating AI use in their organisations.
1. What AI systems are currently in use across the organisation?
The starting point is a complete inventory. Most boards are surprised by how extensive AI use is — not because management has been concealing it, but because AI has been adopted piecemeal across departments without central visibility. If management cannot answer this question with a documented list, the first governance priority is creating one.
2. Which of those systems are classified as high-risk under the EU AI Act?
The EU AI Act's Annex III lists categories of high-risk AI. HR tools, credit decisioning systems, customer-facing automated decisions, and AI used in regulated industries are all candidates. Boards need to know which systems carry high-risk obligations and what the organisation is doing to meet them.
3. Who is accountable for each AI system's performance and compliance?
Accountability needs to be named, not collective. Each AI system in use should have an identified owner responsible for its performance, oversight, and compliance. If accountability is diffuse, it is effectively absent.
4. What is the process for reviewing AI before deployment?
New AI tools should not be deployed without a structured review — of the vendor, the data, the use case, and the associated risks. Boards should understand what that process looks like and whether it is being applied consistently.
5. How are AI-driven decisions being monitored after deployment?
Deploying an AI system and monitoring it are different things. The EU AI Act requires ongoing human oversight of high-risk systems. Boards should ask how AI outputs are being checked, how errors are identified, and what happens when an AI system produces a wrong or harmful outcome.
6. What data are our AI systems using, and where does it come from?
AI systems are only as reliable as the data they are trained on. Boards should understand whether AI systems are using data that is current, representative, and appropriate for the decisions being made — and whether there are data quality or bias risks that have not been assessed.
7. Have we assessed the liability implications of AI errors?
If an AI system makes a decision that harms a customer, employee or third party, who is liable? European liability rules for AI are still taking shape, and exposure will vary by sector and use case. Boards need to understand their organisation's exposure and ensure appropriate governance structures, documentation and human oversight are in place to manage it.
8. What is our vendor's position on AI Act compliance?
Most AI systems are purchased from third-party vendors. Boards should understand what their vendors have committed to under the EU AI Act — specifically, what documentation, technical support and conformity assessments they provide. Deployer obligations cannot be fully transferred to a vendor, but vendor commitments are a material part of the compliance picture.
9. How are employees being trained on responsible AI use?
AI governance is not only a board-level concern. Employees using AI tools in their daily work need to understand the boundaries of appropriate use, the risks of over-reliance on AI outputs, and the obligation to apply human judgement. Boards should ask whether AI literacy and responsible-use training is in place; under Article 4 of the EU AI Act, AI literacy obligations have applied since February 2025.
10. What will our AI governance look like in twelve months?
The EU AI Act's key enforcement deadlines fall in 2025 and 2026. Boards should be asking management not just about current AI governance, but about the programme of work underway to reach compliance. If there is no programme, that is itself a governance finding.
If your board is not yet having this conversation, the time to start is now. Ireland's AI Office becomes operational in August 2026. Acuity AI Advisory provides board briefings, governance frameworks and director education designed specifically for Irish boards navigating AI oversight.