Healthcare Sector
AI Compliance Audit for Hospitals
Clinical AI carries the highest risk classification under the EU AI Act. Hospitals deploying diagnostic AI, clinical decision support, or AI-embedded medical devices need a clear picture of their obligations before the August 2026 enforcement deadline.
Request the audit

TL;DR
An AI compliance audit for hospitals maps clinical decision support, diagnostic AI, administrative AI, and embedded AI in medical devices against the EU AI Act's risk tier framework. For Irish hospitals, the Health Products Regulatory Authority is the designated sectoral regulator — and patient safety obligations overlap directly with EU AI Act human oversight requirements.
What an AI compliance audit means for a hospital
Hospitals are amongst the most complex AI compliance environments in the EU AI Act framework. Clinical AI — diagnostic imaging analysis, sepsis prediction algorithms, clinical pathway recommendation tools, AI-assisted triage — falls into the Act's high-risk category. Administrative AI — staff scheduling, HR screening, resource allocation tools — may also be high-risk depending on the decisions they influence. And AI embedded in medical devices carries a dual regulatory burden: the EU Medical Devices Regulation and the EU AI Act apply simultaneously.
An AI compliance audit for a hospital begins with a comprehensive inventory: every AI system in clinical and administrative use, including AI embedded in platforms procured by different departments over different procurement cycles. Many hospitals will find that their AI footprint is larger and more varied than their IT function is aware of — embedded AI in PACS systems, outpatient scheduling tools, and electronic patient record platforms adds up quickly.
The audit then classifies each system against the EU AI Act's risk framework, maps the specific obligations that apply, assesses the current state of compliance, and produces a prioritised action list. Where patient safety obligations overlap with AI Act oversight requirements — as they frequently do for clinical AI — the audit addresses both in a single assessment.
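The audit sequence described here (inventory, risk classification, obligation mapping, prioritised actions) can be sketched as a minimal data model. This is a hypothetical illustration only: the category labels, tier names, and obligation lists below are assumptions made for the sketch, not the EU AI Act's legal taxonomy or the audit's actual methodology.

```python
from dataclasses import dataclass, field

# Illustrative mapping only; real EU AI Act classification requires
# legal analysis of each system's use, not a lookup table.
TIER_BY_CATEGORY = {
    "diagnostic": "high-risk",
    "triage": "high-risk",
    "clinical-decision-support": "high-risk",
    "embedded-medical-device": "high-risk",
    "scheduling": "limited-risk",  # assumption: depends on the decisions influenced
}

OBLIGATIONS = {
    "high-risk": [
        "conformity assessment",
        "human oversight",
        "logging and auditability",
        "patient transparency",
        "post-market surveillance",
    ],
    "limited-risk": ["transparency"],
}

@dataclass
class AISystem:
    name: str
    category: str
    department: str
    risk_tier: str = field(init=False)
    obligations: list = field(init=False)

    def __post_init__(self):
        # Unknown categories default to high-risk: in a compliance
        # context, unclassified systems are treated cautiously.
        self.risk_tier = TIER_BY_CATEGORY.get(self.category, "high-risk")
        self.obligations = OBLIGATIONS.get(self.risk_tier, [])

def build_register(systems):
    """Return the register with high-risk systems listed first."""
    return sorted(systems, key=lambda s: s.risk_tier != "high-risk")

register = build_register([
    AISystem("Sepsis early-warning model", "clinical-decision-support", "ICU"),
    AISystem("Outpatient scheduler", "scheduling", "Administration"),
    AISystem("PACS image-analysis module", "embedded-medical-device", "Radiology"),
])

for s in register:
    print(f"{s.name}: {s.risk_tier} -> {s.obligations}")
```

The point of the sketch is the shape of the output, not the classifications themselves: a register where every system carries its tier and its obligations, ordered so the highest-exposure systems surface first.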
What the audit produces
- Clinical and administrative AI systems register — complete inventory
- EU AI Act risk tier classification for each system
- HPRA oversight obligations mapped per system
- Medical device AI overlap assessment (MDR vs EU AI Act)
- Human oversight gap analysis for clinical AI outputs
- Administrative AI governance assessment (HR, scheduling, procurement AI)
- Prioritised remediation action list with accountability assignments
EU AI Act and healthcare: what Irish hospitals need to know
The EU AI Act's high-risk classification for healthcare AI is not aspirational — it is the live regulatory framework from August 2026. For hospitals, high-risk obligations include:
- Conformity assessments before deploying new AI systems
- Mandatory human oversight of AI-assisted clinical decisions
- Logging requirements for auditability
- Transparency obligations to patients regarding AI involvement in their care
- Post-market surveillance and incident reporting
Where AI is embedded in a medical device regulated under the MDR, the intersection is particularly important. Medical devices with AI components that fall under HPRA oversight may need to satisfy both regulatory frameworks. The audit maps this overlap explicitly — neither framework can be treated as a substitute for the other.
As sectoral regulator, the HPRA will have enforcement powers from August 2026. Hospitals that cannot demonstrate awareness of their AI systems, their risk classification, and their compliance obligations are exposed — both to enforcement action and to the patient safety implications of ungoverned clinical AI.
Why Acuity AI Advisory
Acuity AI Advisory approaches hospital AI audits from an operational perspective — how AI is actually deployed across clinical and administrative functions, not how vendors describe it in procurement documentation. Ger Perdisatt, former COO of Microsoft Western Europe, brings direct experience of AI deployment at scale in complex organisations.
The audit is vendor-neutral. Acuity AI has no commercial relationship with AI vendors and no interest in the technology choices the hospital makes after the audit. The findings are grounded in what the regulatory framework requires and what good governance looks like in a healthcare setting — not in what subsequent commercial relationships would reward.
The audit fee is fixed and confirmed at scoping. No open-ended proposals, no retainers, no variation. The output is a written document — not a presentation — that the hospital's board and clinical governance structures can act on directly.
Common questions
Is AI in clinical decision support high-risk under the EU AI Act?
Yes. AI systems used in clinical decision support — including diagnostic AI, triage AI, AI that recommends treatment pathways, and AI embedded in medical devices — are classified as high-risk under the EU AI Act. High-risk classification triggers a specific set of deployer obligations: conformity assessments before deployment, human oversight requirements, logging and auditability obligations, incident reporting procedures, and transparency requirements for patients and clinical staff. For hospitals, these obligations sit alongside existing medical device regulation obligations — they do not replace them.
Who regulates AI in Irish hospitals?
The Health Products Regulatory Authority (HPRA) is the designated sectoral regulator for AI in Irish healthcare under the EU AI Act. The HPRA already regulates AI-enabled medical devices under the EU Medical Devices Regulation. For clinical AI that does not fall under the MDR, the HPRA's remit under the AI Act extends oversight to cover risk classification, conformity requirements, and post-market surveillance obligations. Hospitals must understand both regulatory frameworks — and where they overlap.
What should a hospital AI governance policy include?
A hospital AI governance policy should address: the process for evaluating and approving new AI tools before clinical deployment; risk classification under the EU AI Act for each system in use; human oversight requirements for clinical AI outputs; patient rights regarding AI-assisted decisions; incident reporting and post-market surveillance obligations; staff training requirements for clinical AI tools; and the governance structure with accountability assigned at board or executive level. The policy must be grounded in clinical risk management, not just IT policy conventions.
Request an AI Compliance Audit for Your Hospital
Fixed-fee. Written findings. Vendor-neutral.
Get in touch