EU AI Act FAQ
What is the EU AI Act?
Quick answer
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It classifies AI systems into risk tiers — prohibited, high-risk, limited-risk, and minimal-risk — and imposes obligations proportionate to the risk. High-risk AI systems require conformity assessments, technical documentation, human oversight mechanisms, and transparency obligations. Prohibited practices are banned outright. The Act applies to any organisation that develops, places on the market, or uses AI systems within the EU — including Irish businesses. Its obligations apply in phases: the prohibitions have applied since February 2025, and most remaining provisions, including the high-risk regime, apply from 2 August 2026.
The EU AI Act's risk tier structure
The EU AI Act organises AI systems into four risk tiers, each with different obligations.

Prohibited AI practices are banned entirely. They include social scoring, subliminal manipulation, real-time remote biometric identification in publicly accessible spaces (with narrow exceptions for law enforcement), and certain predictive policing tools. These prohibitions came into force in February 2025.

High-risk AI systems are those used in employment, credit, healthcare, education, law enforcement, critical infrastructure, and similar sensitive contexts. They face the Act's most demanding requirements: conformity assessments, technical documentation, human oversight, audit logs, and transparency obligations.

Limited-risk AI systems, including chatbots and systems that generate synthetic content, face targeted transparency requirements: users must be told they are interacting with AI, and synthetic content must be labelled.

Minimal-risk AI systems, the majority of consumer AI applications, carry no specific obligations under the Act, though the AI Office encourages voluntary codes of conduct.
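For technically minded readers, the tier structure can be sketched as a simple lookup. This is purely illustrative and simplified from the summary above; the names and obligation lists here are not an official taxonomy, and real tier classification under the Act is a legal analysis, not a table lookup.

```python
# Illustrative sketch only: a simplified model of the Act's four risk tiers
# and their headline obligations. Not legal advice.
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"


# Headline obligations per tier, paraphrased from the summary above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["banned outright (since February 2025)"],
    RiskTier.HIGH: [
        "conformity assessment",
        "technical documentation",
        "human oversight",
        "audit logs",
        "transparency",
    ],
    RiskTier.LIMITED: ["disclosure to users", "labelling of synthetic content"],
    RiskTier.MINIMAL: [],  # no specific obligations; voluntary codes of conduct
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]


print(obligations_for(RiskTier.HIGH))
```

The point of the sketch is the proportionality principle: obligations grow with the tier, from an empty list at minimal risk to an outright ban at the top.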
Who the EU AI Act applies to
The EU AI Act applies based on geography and role, not on where an organisation is incorporated. Any organisation that places an AI system on the EU market, puts an AI system into service within the EU, or uses an AI system within the EU is within scope. This means Irish businesses using AI tools developed by US companies are within scope as deployers. It means Irish companies that develop AI for internal use are within scope as providers. And it means non-EU organisations that supply AI systems used in Ireland are within scope as importers or providers. The Act creates four categories of regulated actor: providers (those who develop AI, or have it developed, and supply it under their own name), deployers (those who use AI in a professional context), importers (those who bring AI from outside the EU onto the EU market), and distributors (those who make AI systems available in the EU without being the provider or importer). Most Irish organisations are primarily deployers, and deployers have significant compliance obligations under the Act, particularly for high-risk AI systems.
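The four roles above can be sketched as a rough first-pass decision rule. This is an illustrative simplification only: the function name and inputs are invented for this example, an organisation can hold several roles at once, and the Act's actual role tests are considerably more detailed.

```python
# Illustrative sketch only: a very rough first-pass classification of the
# Act's four regulated roles. Real role determination is a legal analysis.
from enum import Enum


class Role(Enum):
    PROVIDER = "provider"        # develops AI, or has it developed, under its own name
    DEPLOYER = "deployer"        # uses AI in a professional context
    IMPORTER = "importer"        # brings third-country AI onto the EU market
    DISTRIBUTOR = "distributor"  # makes AI available without being provider or importer


def likely_role(develops_ai: bool, imports_ai: bool,
                uses_ai_professionally: bool) -> Role:
    """Hypothetical first-pass triage; organisations can hold several
    roles simultaneously under the Act."""
    if develops_ai:
        return Role.PROVIDER
    if imports_ai:
        return Role.IMPORTER
    if uses_ai_professionally:
        return Role.DEPLOYER
    return Role.DISTRIBUTOR


# An Irish business using a US-developed AI tool is, first and foremost, a deployer:
print(likely_role(develops_ai=False, imports_ai=False, uses_ai_professionally=True))
# prints Role.DEPLOYER
```

The ordering of the checks reflects the text above: developing AI makes an organisation a provider regardless of whether it also uses the system, which is why providership is tested first.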
Acuity AI helps Irish organisations understand and meet their EU AI Act obligations before the August 2026 deadline. See our EU AI Act compliance services.