Guide

The EU AI Act: What Irish Organisations Need to Know

The EU AI Act is the world's first comprehensive AI law. High-risk AI system requirements apply from August 2026 — the same month Ireland's AI Office becomes operational and enforcement begins in earnest.

February 2025: Prohibited practices banned
August 2025: GPAI model obligations begin
August 2026: High-risk AI requirements apply

What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Adopted in 2024 and now in force, it applies across all EU member states — including Ireland — without the need for separate national legislation.

The Act takes a risk-based approach. Rather than regulating AI technology as such, it regulates AI applications based on the harm they could cause. Systems that affect people's access to employment, credit, essential services or safety face the strictest requirements. Systems used for general productivity tasks face few or no specific obligations.

Critically, the Act applies to organisations that use AI — not just those that build it. If your organisation deploys an AI system that falls into a high-risk category, you have obligations as a deployer regardless of whether you developed the system yourself or purchased it from a third party.

This means the Act is not just a technology industry concern. Irish businesses in financial services, legal, HR, healthcare and the public sector are all in scope for significant obligations — many without yet knowing it.

Does it apply to Irish businesses?

Yes — across sectors. The EU AI Act applies to any organisation placing AI systems on the EU market or putting them into service within the EU, and to deployers operating within the EU. Headquartering outside the EU does not exempt an organisation if its AI systems are used within EU territory.

Financial Services

Credit scoring, fraud detection, AML systems, insurance pricing and customer-facing AI are all high-risk categories under the Act. The Central Bank of Ireland's own AI guidance runs alongside EU obligations.

Legal

Document review, due diligence AI, legal research tools and predictive analytics used in legal workflows. The Law Society of Ireland has published guidance noting solicitors' responsibilities when AI is used in client-facing work.

HR and Recruitment

CV screening tools, performance management systems and AI-assisted redundancy or promotion decisions are explicitly high-risk. Deployers of automated HR decision-making face oversight and record-keeping obligations, and providers of these tools must complete conformity assessments.

Public Sector

State bodies and local authorities using AI for resource allocation, benefits assessment or permit decisions are directly in scope. Public sector AI draws additional scrutiny from the Act and from Ireland's AI Office.

Healthcare

Clinical decision support, patient triage and diagnostic AI tools fall within high-risk categories. Obligations here overlap with medical device regulation and data protection requirements.

The risk tiers

The Act classifies AI systems into four tiers. Your obligations depend entirely on which tier your systems fall into — and on whether you are a provider (who develops the AI) or a deployer (who uses it in a professional context).

Unacceptable Risk

Banned
  • Social scoring systems by public authorities
  • Real-time remote biometric identification in publicly accessible spaces (with limited exceptions)
  • AI that exploits psychological vulnerabilities
  • Subliminal manipulation of behaviour

High Risk

Conformity requirements by Aug 2026
  • HR and recruitment — CV screening, performance management
  • Credit scoring and lending decisions
  • Critical infrastructure management
  • Law enforcement support systems
  • Access to essential public services

Limited Risk

Transparency obligations
  • Customer-facing chatbots
  • Deepfake or AI-generated content
  • Emotion recognition systems (prohibited outright in workplace and education settings)

Minimal Risk

No specific obligations
  • AI-enabled productivity tools (drafting, summarisation)
  • Spam filters and recommendation engines
  • Most general-purpose AI tools in business use

Key deadlines for Irish organisations

The Act's obligations are being phased in from its entry into force in August 2024. The final and broadest phase — covering high-risk AI systems — arrives in August 2026.

February 2025

Prohibited practices banned

Unacceptable-risk AI — including social scoring systems and real-time remote biometric identification in public spaces — is banned across the EU.

August 2025

GPAI model obligations begin

Providers of general-purpose AI models must comply with transparency and copyright documentation requirements.

August 2026

High-risk AI requirements apply

Full obligations for high-risk AI systems take effect. Ireland's AI Office becomes operational and enforcement begins in earnest.

Where to start

Most Irish organisations are not starting from a blank sheet. You already use AI — the question is whether you know which systems are in scope and what obligations they trigger. A practical readiness process runs in four steps.

01

Inventory your AI systems

List every AI tool, platform, or automated decision-making system in use across your organisation — including those procured by individual departments without central IT involvement. Shadow AI adoption is common and is not exempt from the Act.

02

Classify by risk tier

Map each system against the Act's four risk categories. The classification depends on the function the AI performs, not the technology used. An off-the-shelf HR tool that screens CVs is high-risk regardless of who built it.
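The mapping in steps 1 and 2 can be kept as a simple structured inventory. The sketch below is illustrative only: the tier assignments are simplified examples of the Act's categories, not legal text, and the system names, function labels and fields are hypothetical placeholders — classification of a real system should be confirmed against the Regulation itself.

```python
from dataclasses import dataclass

# Simplified, illustrative mapping from business function to risk tier.
# The Act classifies by the function the AI performs, not the technology used.
RISK_TIERS = {
    "cv_screening": "high",
    "credit_scoring": "high",
    "customer_chatbot": "limited",
    "content_generation": "limited",
    "spam_filtering": "minimal",
    "document_summarisation": "minimal",
    "social_scoring": "unacceptable",
}

@dataclass
class AISystem:
    name: str
    function: str      # what the system does, e.g. "cv_screening"
    procured_by: str   # owning department — shadow AI is still in scope

def classify(system: AISystem) -> str:
    """Return the risk tier for a system; flag unknown functions for review."""
    return RISK_TIERS.get(system.function, "review")

# A minimal inventory, including a departmentally procured tool.
inventory = [
    AISystem("RecruitAssist", "cv_screening", "HR"),
    AISystem("HelpBot", "customer_chatbot", "Customer Service"),
    AISystem("MailShield", "spam_filtering", "IT"),
]

for s in inventory:
    print(f"{s.name}: {classify(s)} risk")
```

Note the default of "review" rather than "minimal": under the Act's logic, an unclassified system is an open question, not a safe one.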

03

Identify high-risk obligations

For any high-risk system, determine which obligations apply to you as a deployer. These include human oversight mechanisms, data governance, transparency to users, and documented record-keeping for regulatory inspection. (The conformity assessment itself is the provider's responsibility, but you must verify it has been carried out.)

04

Build your remediation roadmap

Prioritise actions by deadline and regulatory exposure. Not everything needs to be solved at once — but high-risk systems with no current oversight documentation are the most urgent gap to close before August 2026.

Need a readiness review?

Acuity AI Advisory conducts structured EU AI Act readiness reviews — inventory, classification, gap analysis and a remediation roadmap. Fixed-fee, vendor-neutral, typically completed in two to three weeks.

Learn more

Common questions

Does the EU AI Act apply to Irish businesses?

Yes. The EU AI Act applies to any organisation that uses, develops, imports or deploys AI systems within the EU — regardless of where the organisation is headquartered. Irish businesses using AI in HR, credit decisioning, customer-facing tools, document processing or any regulated function are in scope. Being based in Ireland does not exempt you from the Regulation.

When does EU AI Act enforcement begin in Ireland?

Enforcement is phased. Prohibited AI practices (unacceptable risk) have been banned since February 2025. GPAI model obligations began in August 2025. High-risk AI system requirements apply from August 2026, when Ireland's AI Office also becomes fully operational.

What is high-risk AI under the EU AI Act?

High-risk AI systems include those used in HR and recruitment (CV screening, performance management), credit scoring and lending decisions, critical infrastructure, law enforcement support, biometric identification, and access to essential services. Providers of high-risk AI must complete conformity assessments, and deployers must meet transparency and human oversight requirements, by August 2026.

What should Irish organisations do now to prepare for the EU AI Act?

The four immediate steps are: (1) inventory all AI systems currently in use across your organisation, (2) classify each by the Act's risk tier, (3) identify which high-risk obligations apply to your situation, and (4) build a remediation roadmap prioritised by deadline and exposure. Organisations that have not started should begin before Q3 2026 — enforcement is active from August.

Book an EU AI Act Readiness Review

Fixed-fee. Vendor-neutral. Typically completed in two to three weeks. Start with a conversation.