The EU AI Act is in force. Most Irish directors cannot yet answer basic questions about what it requires of them. This is a plain-English guide: no technical jargon, no vendor framing.
The EU AI Act entered into force in August 2024. It is the most significant AI regulation in the world, and it applies to every organisation deploying AI systems within the European Union, including Irish companies that built no AI themselves and simply use commercially available tools.
Most of what has been written about the Act is aimed at technology professionals. This piece is aimed at company directors who need to understand what the Act requires of them personally, without the technical overlay.
What the Act covers
The Act regulates artificial intelligence systems — tools that take inputs, process them using some form of machine-based logic, and produce outputs that influence decisions or actions. That definition covers a wider range of tools than many directors realise. It includes systems embedded in HR software, loan origination platforms, fraud detection tools, recruitment screening systems, and customer service automation. If your organisation uses any commercial software with AI-assisted features — and almost all organisations do — the Act is relevant to you.
It does not cover simple rules-based software that does not learn from data. But the line between AI systems and conventional software is blurrier than it used to be, and most modern business software is drifting into scope.
The risk tiers
The Act classifies AI systems into four categories.
Prohibited practices are banned outright. These include AI systems that manipulate behaviour through subliminal techniques, exploit psychological vulnerabilities, or enable mass social scoring by public authorities. No Irish business should be operating anything in this category, but the IT inventory review the Act requires will occasionally surface edge cases.
High-risk systems carry the most significant compliance obligations. This category includes AI used in recruitment, employment decisions, credit scoring, access to essential services, critical infrastructure management, and certain safety-critical applications. If your organisation uses AI in HR decisions, lending, or infrastructure management, you are likely operating high-risk AI systems. The obligations include mandatory risk assessments, technical documentation, human oversight mechanisms, and registration requirements.
Limited-risk systems require transparency — primarily disclosure obligations to users that they are interacting with an AI system. Chatbots and AI-generated content tools typically fall here.
Minimal-risk systems carry no specific obligations beyond the general duty to deploy AI responsibly. Most AI productivity tools fall into this category.
What "deployer" means
The Act distinguishes between providers (those who develop AI systems) and deployers (those who put AI systems into use). Most Irish businesses are deployers, not providers. Deployers carry lighter obligations than providers, but they are not trivial.
As a deployer, your organisation is required to use AI systems in accordance with the provider's instructions, implement human oversight where required, monitor the system's performance, and report serious incidents to the relevant national authority. For high-risk AI systems, deployers have additional obligations, including conducting fundamental rights impact assessments and ensuring adequate staff training.
The key point for directors: the Act does not exempt you because you bought the AI from a vendor. You are responsible for how you deploy it.
What directors are personally accountable for
The Act places obligations on organisations, not directly on individual directors. But director liability in Irish company law is engaged when an organisation breaches regulatory requirements due to governance failure — particularly where the board was in a position to put appropriate oversight in place and failed to do so.
Directors of regulated entities in financial services, healthcare, and critical infrastructure face the most direct exposure, but the principle applies broadly. A board that has not asked basic questions about AI governance — what systems are in use, how they are classified, what oversight exists — is not discharging its duties in the current environment.
The key deadlines
The Act has been phasing in since August 2024. The deadline that matters most for most Irish organisations is August 2026, when high-risk AI system obligations become fully enforceable. That deadline is closer than it appears. The work required — inventory, risk classification, gap analysis, documentation, oversight framework — takes time to do properly. Organisations that have not started should start now.
The general-purpose AI model obligations — which fall primarily on providers of models such as large language models, with knock-on effects for businesses deploying tools built on them — came into effect in August 2025.
We advise Irish boards and directors on EU AI Act compliance, from initial readiness assessment to full governance framework implementation. Our advice is independent of any AI vendor. Contact us to understand your current position.