AI Governance FAQ
How do you audit AI systems?
Quick answer
Auditing an AI system involves reviewing: the inputs it uses (data quality, data sources, potential bias), the outputs it produces (accuracy, consistency, appropriateness for purpose), the decisions it influences (whether human oversight is operating as intended), the documentation that governs its use (policy, risk assessment, training records), and compliance with applicable regulatory requirements. For high-risk AI systems under the EU AI Act, audit is not optional: the Act requires providers and deployers to maintain audit logs and technical documentation.
The five dimensions of an AI system audit
A thorough AI system audit addresses five dimensions.

Inputs: review the data the AI system uses, including its quality, sources, and potential for bias. Is the training data representative? Are there gaps or distortions that could lead to systematically wrong or discriminatory outputs?

Outputs: review what the AI system produces: its accuracy, consistency, appropriateness for its stated purpose, and whether it is behaving as expected.

Decisions: review how AI outputs are used in decision-making. Is human oversight operating as designed? Are AI recommendations being accepted uncritically, or being properly reviewed?

Documentation: review the governance documentation: the AI use policy, the risk assessment, training records for staff who use the system, and vendor contracts. Is the documentation current, complete, and adequate?

Compliance: review whether the AI system's operation complies with applicable requirements: the EU AI Act, GDPR, sectoral regulations, and the organisation's own policies.
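One way to keep an audit honest is to track findings against each of the five dimensions and flag any dimension that received no attention. The sketch below is a minimal, hypothetical structure for doing that; the dimension names come from the text above, but the class, method names, and workflow are illustrative assumptions, not a prescribed audit tool.

```python
from dataclasses import dataclass, field

# The five audit dimensions described above. Everything else in this
# sketch is an illustrative assumption.
DIMENSIONS = ("inputs", "outputs", "decisions", "documentation", "compliance")


@dataclass
class AuditRecord:
    """Collects findings for one AI system across the five dimensions."""
    system_name: str
    findings: dict = field(default_factory=lambda: {d: [] for d in DIMENSIONS})

    def add_finding(self, dimension: str, note: str) -> None:
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown audit dimension: {dimension}")
        self.findings[dimension].append(note)

    def uncovered_dimensions(self) -> list:
        # Dimensions with no recorded findings are gaps in audit coverage
        # and should be flagged before the audit is signed off.
        return [d for d in DIMENSIONS if not self.findings[d]]
```

In use, an auditor would record each observation as they go (for example, `record.add_finding("inputs", "training data under-represents applicants over 50")`) and then check `uncovered_dimensions()` to confirm nothing was skipped.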
EU AI Act audit requirements
For high-risk AI systems under the EU AI Act, audit is not a discretionary activity — it is a legal requirement. The Act requires providers and deployers of high-risk AI to maintain audit logs: automatic records of the system's operation that allow for post-hoc review of its activity and outputs. It requires technical documentation that describes the system's design, development, testing methodology, and performance characteristics. It requires that the risk management system be reviewed and updated regularly throughout the system's lifecycle — not just at deployment. And it requires that human oversight mechanisms be tested to ensure they are actually operating as intended, not just nominally in place. For Irish organisations, this means that AI audit needs to be built into governance as a regular, documented activity — not conducted once and forgotten.
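To make the audit-log idea concrete, the sketch below appends one structured entry per system operation, with enough context for post-hoc review: a timestamp, a summary of the inputs, the output, and whether a human reviewed it. This is a minimal illustration of the concept, not the Act's prescribed logging schema; all field names and the JSON-lines format are assumptions.

```python
import json
import datetime


def log_ai_event(log_file, system_id, inputs_summary, output, human_review):
    """Append one structured audit-log entry as a JSON line.

    A minimal sketch of automatic record-keeping for post-hoc review.
    Field names are illustrative assumptions, not a regulatory schema.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs_summary": inputs_summary,
        "output": output,
        # Capture whether human oversight actually operated on this output,
        # not just that an oversight mechanism nominally exists.
        "human_review": human_review,
    }
    log_file.write(json.dumps(entry) + "\n")
    return entry
```

Writing one line of JSON per event keeps the log append-only and easy to review later, which is the point of an audit trail: an auditor can reconstruct what the system did and whether oversight operated as intended.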
Acuity AI conducts AI governance audits for Irish organisations ahead of EU AI Act enforcement. See our AI governance services.