8 min read

Article 4 of the EU AI Act: The AI Literacy Obligation Irish Employers Are Ignoring


Ger Perdisatt

Founder, Acuity AI Advisory

Article 4 of the EU AI Act has required organisations to ensure staff have sufficient AI literacy since February 2, 2025. Most Irish employers have done nothing. Here is what the obligation actually requires and how to demonstrate compliance.

Most of the attention on the EU AI Act focuses on high-risk AI systems, prohibited applications, and the August 2026 enforcement deadlines. Those obligations matter. But there is a requirement that has been in force since February 2, 2025 — more than a year ago — that most Irish employers have not addressed at all.

Article 4 of the EU AI Act imposes a direct obligation on every organisation that uses AI systems to ensure their staff have sufficient AI literacy. Not aspirationally. Not eventually. It has been a legal requirement for over a year.

This post is for HR leaders, operations directors, and business owners who need to understand what Article 4 actually requires, what "sufficient" means in practice, and how to document your compliance position before enforcement capacity arrives in Ireland.

Is AI literacy training mandatory for employees in Ireland?

Yes — with a qualification on what "training" means.

Article 4 of the EU AI Act does not prescribe a training programme, a minimum number of hours, or a certification standard. What it requires is that deployers (the Act's term for organisations that use AI systems) take measures to ensure their staff possess sufficient AI literacy to understand and work with the AI systems relevant to their role.

The obligation applies to all organisations that deploy AI systems. That includes any Irish business using Microsoft Teams with Copilot features, ChatGPT or similar generative AI tools, AI-assisted recruitment platforms, scheduling software with predictive features, customer service chatbots, or automated document processing tools.

If your organisation uses any of these — which describes the vast majority of Irish employers today — Article 4 applies to you.

What does Article 4 of the EU AI Act actually require employers to do?

The text of Article 4 is short but consequential. It requires providers and deployers of AI systems to take measures to ensure, to their best extent, that their staff and any other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy, taking into account their technical knowledge, experience, education, and training, as well as the context in which the AI systems are to be used.

Breaking that down into practical obligations:

Assess current AI literacy levels. You cannot demonstrate sufficiency without a baseline. This means understanding what AI systems your staff use, in what contexts, and what they currently know about how those systems work, where they fail, and what their limitations are.

Identify gaps by role and risk level. The obligation is proportionate to role. A finance director using Copilot to draft board reports has different literacy requirements from a recruiter using an AI-assisted screening tool, which is classified as high-risk under the Act and carries significantly greater potential for harm. Literacy requirements should be mapped to actual exposure, not applied uniformly across the organisation.

Take measures to address those gaps. The Act does not specify what measures are required. Internal training sessions, structured learning programmes, documented onboarding procedures for AI tools, and formal assessments all count — provided they are calibrated to the actual risk and context.

Document what you have done. Compliance is demonstrated through evidence, not intention. An organisation that has run training but kept no records is in a weaker position than one with a documented literacy programme, even if the underlying activity was similar.

Do SMEs have to comply with Article 4?

Yes. The proportionality principle in Article 4 affects the scale of the programme required, not whether the obligation applies.

The Act explicitly states that literacy measures should take into account the context in which AI systems are used. A ten-person professional services firm using ChatGPT for research is not expected to run the same programme as a five-hundred-person financial services organisation using AI for credit decisions.

But both are expected to do something. Both are expected to be able to demonstrate that the staff handling AI systems understand their role-relevant risks, limitations, and outputs.

For SMEs, a proportionate response might be:

  • A documented AI tools register listing what is in use and by whom
  • A brief structured onboarding process when a new AI tool is introduced
  • Role-specific guidance on known limitations of tools already in use
  • A light-touch annual review to capture changes in the AI landscape

This is not a significant burden. The risk lies in doing nothing, which — as enforcement capacity grows — becomes an increasingly exposed position.

What counts as sufficient AI literacy?

This is where the obligation creates both risk and opportunity, because the Act does not define a standard.

"Sufficient" is contextual. It depends on:

  • The AI systems being used — general productivity tools carry lower literacy requirements than systems making consequential decisions about people
  • The role of the individual — a user of an AI output has different requirements from someone configuring or overseeing an AI system
  • The risk level of the application — staff involved in high-risk AI deployment (recruitment, credit assessment, safety-critical processes) face higher literacy requirements than staff using minimal-risk tools

In the absence of a defined standard, organisations should work backwards from the question a regulator would ask: "If something went wrong with this AI system, would the relevant staff have had sufficient understanding to identify and escalate the problem?"

Practically, sufficient AI literacy for a business user in most Irish organisations would include:

  • Understanding that AI outputs can be wrong, biased, or confidently incorrect
  • Knowing what data the tool uses and what privacy constraints apply
  • Understanding when human judgment must override AI output
  • Being able to recognise when an output warrants escalation or review
  • Knowing the organisation's policy for the AI tool in question

For staff involved in high-risk AI applications — particularly AI in recruitment or financial decision-making — the bar is materially higher and should include knowledge of the specific risks associated with the system, bias indicators to watch for, and documented escalation procedures.

How do I document AI literacy compliance?

Documentation is what turns a compliance activity into a compliance position. Without it, there is no evidence that the obligation was met.

A defensible documentation approach includes the following elements:

AI tools register. A current list of AI systems in use across the organisation, categorised by risk level and the roles that interact with them. This does not need to be complex — a maintained spreadsheet is sufficient.
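
For illustration only, a minimal register might contain entries like the following (the tools, owners, and review dates here are invented):

  • Microsoft Copilot | minimal risk | all office staff | owner: IT manager | last reviewed: January 2026
  • AI-assisted CV screening platform | high risk | recruitment team | owner: HR director | last reviewed: January 2026
  • Customer service chatbot | limited risk | support team | owner: operations lead | last reviewed: January 2026

One line per tool, with a named owner and a review date, is enough to show the register is maintained rather than created once and forgotten.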

Literacy assessment records. Evidence of how you assessed current staff literacy. This might be a structured quiz, a self-assessment survey, or documented conversations during performance reviews. The goal is to show you understood the gap before designing the response.
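
As a sketch of what a baseline assessment might look like, a short self-assessment survey could ask each staff member:

  • Which AI tools do you use in your role, and for which tasks?
  • Can you give an example of how a tool's output could be wrong, biased, or out of date?
  • Do you know what data you are and are not permitted to enter into the tool?
  • Do you know when an output should be escalated for human review, and to whom?

The answers give you both the baseline and a dated record that the assessment took place.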

Training and awareness records. Evidence of what measures were taken, when, and for whom. Attendance records, completion certificates for digital learning, or signed acknowledgement of policy updates all serve this purpose.

Role-specific guidance documentation. Written guidance for staff who use AI tools — covering what the tool does, what it does not do, where errors are most likely, and how to escalate concerns. This can live in your HR policy library or your intranet.
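
As an illustrative sketch, a one-page guidance note for a generative AI tool might cover:

  • Approved uses (for example, drafting and summarising internal documents) and uses that are off-limits (for example, decisions about individuals)
  • Known failure modes, such as fabricated references, outdated information, and confidently stated errors
  • Data that must never be entered, such as personal data or client-confidential material
  • The named contact and procedure for escalating concerns about an output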

Review records. Evidence that the programme is maintained and updated. AI literacy is not a one-off exercise — the landscape changes, new tools are introduced, and literacy programmes must keep pace.

Why the August 2026 deadline makes this more urgent

Article 4 has applied since 2 February 2025. But enforcement capacity in Ireland is arriving in stages.

The AI Office of Ireland is due to be operational by 2 August 2026, the same date that full compliance requirements for high-risk AI systems take effect. Organisations deploying high-risk AI must, by that date, have conformity documentation, human oversight mechanisms, and demonstrable compliance with all applicable requirements of the Act, including Article 4.

For organisations operating high-risk AI systems (which includes most employers using AI-assisted recruitment tools), the AI literacy obligation is not a standalone compliance exercise. It is part of a connected set of requirements that regulators will assess together.

Penalties for non-compliance with the EU AI Act reach up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious infringements. Article 4 non-compliance alone is unlikely to attract the highest tier of penalties, but it will be a visible marker of an organisation's overall compliance posture, and that posture will matter when regulators begin their work.

The practical starting point

Most Irish employers are not starting from zero. Many have done some form of AI awareness activity — a lunch-and-learn, an all-hands briefing, a note in the staff handbook. The gap is usually in documentation and in calibrating the programme to actual risk exposure.

The steps that move an organisation from informal activity to a defensible compliance position are:

  1. Build or update your AI tools register
  2. Map each tool to the roles that use it and the risk level it carries
  3. Assess current literacy levels against role-specific requirements
  4. Design proportionate measures to close the gaps — these do not need to be elaborate
  5. Document everything, including the review cycle

The organisations that will face the most exposure are those that have done nothing, documented nothing, and cannot demonstrate that they took the obligation seriously.


Acuity AI Advisory works with Irish organisations to assess their current AI literacy exposure, design proportionate compliance programmes, and build the documentation that demonstrates compliance to regulators.

If you are unsure where your organisation stands on Article 4 — or on broader EU AI Act compliance — contact us for a structured assessment. We will tell you what you actually need to do, not what looks impressive on paper.

eu ai act · compliance · ireland