
The EU AI Act: What Irish Boards Need to Know Before August


Ger Perdisatt

Founder, Acuity AI Advisory

The EU AI Act is no longer a horizon event. The first compliance obligations are live, and Irish boards that have not yet begun a readiness assessment are already behind. Here is what matters, and what to do about it.

The EU AI Act came into force in August 2024. The phase-in schedule is well understood by compliance teams, but board-level awareness remains patchy — particularly among Irish organisations that are not yet thinking about AI as a regulatory matter.

This is the piece I would want every board member in Ireland to read before their next governance meeting.

What is already in force

The Act operates on a tiered schedule. As of February 2025, the first tranche of obligations is live: the ban on prohibited AI practices. These include AI systems that manipulate people through subliminal techniques, exploit vulnerabilities, engage in social scoring, or carry out real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions). Organisations using any such systems must have ceased doing so by now.

The next significant deadline is August 2025, when obligations for General Purpose AI (GPAI) models come into effect. If your organisation is deploying AI tools built on foundation models — which includes most commercially available AI products — you will need to understand what obligations apply to you as a downstream deployer.

August 2026 brings the bulk of the high-risk AI system requirements into force. This is where the compliance burden becomes substantial for many organisations.

Why this matters to boards specifically

The EU AI Act places explicit obligations on deployers, meaning the organisations that use AI systems, not just the providers that build them. A board that approves or oversees the use of AI tools in HR, lending, credit assessment, recruitment, or employee monitoring may be overseeing high-risk AI systems without realising it.

Director liability is not the primary concern here. The concern is governance adequacy. A board that cannot answer basic questions about what AI systems the organisation operates, how they are risk-classified, and what oversight mechanisms are in place is not discharging its governance responsibilities in the current regulatory environment.

The questions boards should be asking management right now:

  • What AI systems are we currently using across the organisation?
  • Have any of them been assessed against the Act's risk classification framework?
  • Do we have a process for evaluating new AI tools before deployment?
  • Who owns AI governance, and is that clearly documented?

If management cannot answer these questions clearly and quickly, that is itself information the board needs.

The readiness assessment — where to start

A practical EU AI Act readiness assessment covers four stages:

1. Inventory. Build a complete picture of every AI system in use across the organisation. This includes tools purchased from vendors, AI features embedded in existing software (many organisations are surprised by how many there are), and any internally developed tools. The inventory needs to capture each system, its use case, the data it processes, and who owns it; a simple sketch of what such a register can look like follows this list.

2. Risk classification. Apply the Act's four-tier framework — prohibited, high-risk, limited risk, minimal risk — to each system in the inventory. High-risk systems require the most attention: mandatory conformity assessments, human oversight mechanisms, and documentation obligations.

3. Gap analysis. For each high-risk system, assess your current compliance position against the Act's requirements. This will typically surface gaps in documentation, oversight mechanisms, and incident reporting processes.

4. Remediation roadmap. Prioritise the gaps by materiality and deadline proximity. Produce a structured roadmap with owners, timelines, and board reporting points.
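
For readers who want a concrete picture of what the first two stages produce, the sketch below shows one possible shape for the register: an entry per system with its use case, data, owner and an assigned risk tier. It is illustrative only; the field names, the Python representation and the recruitment example are assumptions about a sensible format, not a prescribed template or a compliance tool.

    # Illustrative sketch only: one possible shape for an AI system register.
    # Field names and the example entry are assumptions, not a prescribed format.
    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class RiskTier(Enum):
        PROHIBITED = "prohibited"
        HIGH = "high-risk"
        LIMITED = "limited risk"
        MINIMAL = "minimal risk"

    @dataclass
    class AISystemEntry:
        name: str              # vendor tool, embedded feature, or internal build
        use_case: str          # what the system is actually used for
        data_processed: str    # categories of data the system handles
        owner: str             # a named business owner, not just "IT"
        risk_tier: Optional[RiskTier] = None  # assigned during the classification stage

    # A hypothetical recruitment-screening entry; employment and recruitment
    # uses sit in the Act's high-risk category.
    inventory = [
        AISystemEntry(
            name="Vendor CV-screening module",
            use_case="Shortlisting job applicants",
            data_processed="Applicant CVs and assessment scores",
            owner="Head of People",
            risk_tier=RiskTier.HIGH,
        ),
    ]

    # The gap-analysis stage then works from the high-risk subset.
    high_risk = [s for s in inventory if s.risk_tier is RiskTier.HIGH]

The register could just as easily live in a spreadsheet; the point is the same either way. Every system gets a named owner, a stated use case and an explicit risk tier before the gap analysis begins.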

This work does not need to be exhaustive before August 2025. But it needs to have started.

A note on vendor-neutral advice

One pattern I have seen repeatedly in Irish organisations is AI readiness work that is effectively led by a software vendor. The vendor has a legitimate commercial interest in a particular outcome — usually the adoption of their platform's compliance features. The readiness assessment, as a result, ends up scoped around tools the vendor supports.

Vendor-led readiness work is not a substitute for independent AI governance advice. The inventory, risk classification and gap analysis stages require an objectivity that a commercial vendor relationship makes structurally difficult to deliver.


If you are a board member or governance professional who wants to understand where your organisation stands, the first step is a straightforward diagnostic conversation — not a procurement decision. That is the work we do at Acuity AI Advisory, and it is vendor-neutral by design.

eu ai act · board advisory