Irish employers preparing for the EU Pay Transparency Directive are reaching for AI tools to do the heavy lifting. What most haven't realised: those same AI tools may trigger a second, separate set of obligations under the EU AI Act. One compliance effort can silently create another.
There is a compliance problem developing in Irish organisations that almost no one has written about yet. It sits at the intersection of two EU regulations — the Pay Transparency Directive and the EU AI Act — and it will catch employers who are trying to do the right thing.
The scenario goes like this. An Irish employer, alert to the Pay Transparency Directive, begins preparing. They engage an HR software platform to analyse their pay data, model job evaluations, and set pay bands. The platform uses AI. The employer now has a compliance solution for the Directive. What they may not realise is that they have also deployed a high-risk AI system under the EU AI Act — and triggered a separate compliance obligation they almost certainly have not assessed.
This is the dual compliance trap.
Where the Pay Transparency Directive stands right now
The EU Pay Transparency Directive required transposition into Irish law by 7 June 2026. The Irish Government has not met that deadline. This matters less than many employers assume.
EU Directives do not become irrelevant because a member state fails to transpose them on time. From the transposition date, the burden of proof in pay discrimination cases shifts to the employer — regardless of whether Irish implementing legislation exists. Courts and the Workplace Relations Commission can reference the Directive's provisions directly when interpreting employment equality law. Employers who believed they had more time because of the Government's delay are wrong.
The practical requirements are clear: publish pay ranges in job advertisements, provide pay information to employees on request, build job architectures on objective and gender-neutral criteria, conduct pay equity analyses, and report pay gaps at scale. For organisations that have not started this work, time is genuinely short.
Where the EU AI Act stands
The EU AI Act has been progressively coming into force since August 2024. The provisions most relevant to employers — those governing high-risk AI systems in employment — apply from 2 August 2026.
The Irish AI Office, which is the national market surveillance authority for the AI Act, becomes operational on the same date. Enforcement begins in less than four months.
High-risk AI in employment is defined in Annex III of the AI Act. It covers AI systems used to make or materially influence decisions about:
- Recruitment and selection of candidates
- Promotion and career advancement
- Pay-setting and determination of terms and conditions
- Performance monitoring and evaluation
- Contract termination and workforce reduction
The compliance obligations for deployers of high-risk AI are demanding. They include: verifying that the system has undergone conformity assessment before deployment; implementing human oversight mechanisms — not nominal oversight, but genuine capacity to review and override AI outputs; maintaining audit logs and documentation; testing for bias across protected characteristics; and keeping records available for regulatory inspection.
These obligations apply to every organisation that deploys a high-risk AI system. You do not need to have built the system. You need to be using it.
The AI tools you are already using for pay transparency
Here is where the trap closes.
The HR software market has moved rapidly. Payroll analytics platforms, compensation benchmarking tools, job evaluation software, and applicant tracking systems (ATS) have embedded AI into their core functions. Many of these AI features operate quietly in the background — flagging pay anomalies, producing draft pay band recommendations, scoring job descriptions against evaluation frameworks, ranking roles by complexity.
If any of these features touch pay-setting, promotion, or recruitment decisions, they are candidates for high-risk classification under the EU AI Act.
Consider the most common tools Irish employers are deploying for pay transparency preparation:
Pay equity and gap analysis platforms. Tools that analyse remuneration data to identify unexplained pay gaps. If the AI produces recommendations about how to adjust pay bands or remediate gaps — recommendations that influence actual pay decisions — this is not passive reporting. It is AI materially influencing pay-setting decisions.
Job evaluation software. Platforms that use AI to assess roles against evaluation criteria and assign grade levels. Grade assignment is the foundation of pay band allocation. AI involvement in grading is AI involvement in pay-setting.
Compensation benchmarking tools. Systems that use AI to recommend salary ranges by role based on market data. If those recommendations feed into your published pay bands, the AI has influenced a pay decision.
ATS and HR information systems with embedded AI. Many ATS platforms now include AI-driven features for shortlisting, candidate scoring, or performance flagging. These are explicitly within the AI Act's high-risk scope for recruitment and career advancement.
The compliance logic nobody is joining up
An employer who deploys these tools to achieve Pay Transparency Directive compliance may simultaneously be deploying unassessed high-risk AI under the EU AI Act. The two obligations have entirely different owners. Pay transparency sits with HR and the board. AI Act compliance sits with whoever is accountable for technology governance — if anyone has been assigned.
The result is that the compliance effort for one regulation creates exposure under the other.
This is not a theoretical risk. From 2 August 2026, the Irish AI Office will be operationally capable of receiving complaints and conducting investigations. The AI Act provides for fines of up to €15 million or 3% of global annual turnover, whichever is higher, for deployers of non-compliant high-risk AI systems. The first enforcement actions are likely to be triggered by visible compliance failures — and an employer publicising its pay transparency efforts while running unassessed high-risk AI would be an obvious candidate.
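For a sense of scale: Article 99(4) of the AI Act caps these fines at €15 million or 3% of worldwide annual turnover, whichever is higher. A minimal sketch of how the cap scales with company size (the function name is ours, not the Act's):

```python
def max_fine_eur(global_annual_turnover_eur: int) -> float:
    """Upper bound on AI Act fines for deployer non-compliance,
    per Article 99(4): EUR 15 million or 3% of worldwide annual
    turnover, whichever is higher."""
    return max(15_000_000, global_annual_turnover_eur * 3 / 100)

# A EUR 200m-turnover company: 3% is EUR 6m, so the EUR 15m floor applies.
smaller = max_fine_eur(200_000_000)   # 15_000_000
# A EUR 1bn-turnover company: 3% is EUR 30m, which exceeds the floor.
larger = max_fine_eur(1_000_000_000)  # 30_000_000
```

The point of the `max` is that the €15 million figure is a floor on the cap, not a ceiling: for large employers, the 3% limb governs.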
What the EU AI Act actually requires you to do
If you deploy AI tools that touch pay, promotion, or recruitment decisions, your obligations as a deployer are as follows.
Verify the system's classification. Ask your vendor directly whether their product has been assessed for AI Act compliance. Request the technical documentation and EU declaration of conformity for any AI system they claim is compliant. Do not accept marketing assurances — request documentation.
Conduct your own assessment. Even where a vendor has conducted a conformity assessment, you as deployer must assess how you are using the system. The same tool can be high-risk in one deployment context and not in another, depending on the decisions it influences.
Implement human oversight. Designate specific individuals with the authority, training, and practical capacity to review and override AI outputs before they influence pay or hiring decisions. Document who this is, what their remit covers, and what the override mechanism looks like in practice.
Establish an audit trail. Maintain logs of what the AI system produced, what human review occurred, and what decision was ultimately made. This documentation is what a regulator will ask for. If you cannot produce it, you cannot demonstrate compliance.
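What such a record might look like can be sketched as a simple data structure. The AI Act does not prescribe a schema, so every field name below is an illustrative assumption drawn from the three elements above: what the AI produced, what human review occurred, and what was decided.

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """Illustrative audit-trail entry for one AI-influenced HR decision.
    Field names are assumptions for this sketch, not mandated by the Act."""
    system_name: str      # which AI tool produced the output
    ai_output: str        # what the system recommended
    reviewer: str         # the designated human overseer
    review_notes: str     # what the reviewer actually checked
    overridden: bool      # whether the human changed the outcome
    final_decision: str   # the decision ultimately made
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example entry for a pay-banding recommendation.
record = AIDecisionRecord(
    system_name="pay-band-recommender",
    ai_output="Grade 5 band: EUR 52,000-61,000",
    reviewer="j.murphy",
    review_notes="Checked against market data and internal relativities",
    overridden=True,
    final_decision="Grade 5 band: EUR 54,000-61,000",
)
row = asdict(record)  # serialise so it can be produced on regulatory request
```

The substance matters more than the format: whatever system you use, each AI-influenced decision should leave a row like this that you can hand to a regulator.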
Test for bias. AI systems used in pay and employment decisions must be tested for discriminatory outputs across protected characteristics. Vendor testing is not sufficient — you need to understand whether the system produces biased outputs in your specific workforce context.
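A simple first screen — well short of a full bias audit — is to compare the tool's outcomes across groups in your own workforce data. The sketch below applies the four-fifths rule, a conventional adverse-impact heuristic from selection practice rather than anything mandated by the AI Act; the group labels, data, and 0.8 threshold are all assumptions for illustration.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, e.g. one pair per
    candidate an AI shortlisting tool scored. Returns rate per group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, picked in outcomes:
        totals[group] += 1
        selected[group] += int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(outcomes):
    """Lowest group selection rate divided by the highest. Values below
    ~0.8 (the 'four-fifths rule' heuristic) warrant closer review."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical shortlisting outcomes: group A selected 3 of 4,
# group B selected 1 of 4.
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]
ratio = adverse_impact_ratio(sample)  # 0.25 / 0.75 = one third, below 0.8
```

A ratio this far below 0.8 does not prove discrimination, but it is exactly the kind of output a regulator will expect you to have looked at — in your workforce, not the vendor's test set.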
Align your governance. Connect your pay transparency governance structure — which should already involve HR leadership and the board — with your AI governance framework. These are not separate workstreams. They need to be coordinated.
The practical audit you need to run
Before 2 August 2026, every Irish organisation using AI in its HR function should run a structured audit of its HR technology stack. The audit should answer five questions:
- Which of our HR tools contain AI features?
- Do any of those features touch pay-setting, promotion, or recruitment decisions?
- Has each of those systems been assessed for EU AI Act compliance — by the vendor, and by us as deployer?
- Do we have documented human oversight and audit trail mechanisms in place?
- Is our board informed of the AI Act obligations that sit alongside our pay transparency compliance programme?
If the answer to any of questions three through five is no, you have work to do before August.
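The five questions map naturally onto a structured inventory. The sketch below is illustrative only — the tool names, field names, and sample stack are invented — but it shows the shape of the exercise: list every HR tool, record the answers, and flag anything AI-driven that touches a high-risk area without a complete assessment trail.

```python
# Hypothetical HR-stack inventory; fields mirror the five audit questions.
hr_stack = [
    {"tool": "pay-equity-platform", "has_ai": True,
     "touches": {"pay-setting"}, "vendor_assessed": True,
     "deployer_assessed": False, "oversight_documented": False},
    {"tool": "payroll-engine", "has_ai": False,
     "touches": set(), "vendor_assessed": False,
     "deployer_assessed": False, "oversight_documented": False},
]

# Annex III employment areas most relevant to pay transparency work.
HIGH_RISK_AREAS = {"pay-setting", "promotion", "recruitment"}

def audit_gaps(stack):
    """Flag AI tools touching high-risk areas that lack vendor assessment,
    deployer assessment, or documented oversight."""
    gaps = []
    for t in stack:
        if t["has_ai"] and t["touches"] & HIGH_RISK_AREAS:
            missing = [k for k in ("vendor_assessed", "deployer_assessed",
                                   "oversight_documented") if not t[k]]
            if missing:
                gaps.append((t["tool"], missing))
    return gaps

gaps = audit_gaps(hr_stack)
# The pay equity platform is flagged: vendor-assessed, but no deployer
# assessment and no documented oversight. The payroll engine has no AI
# features, so it falls outside scope.
```

A spreadsheet does the same job; what matters is that the inventory exists, is complete, and is dated before the enforcement deadline.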
The bigger picture
The convergence of the Pay Transparency Directive and the EU AI Act is not accidental. Both regulations are part of the EU's broader effort to make employment decisions fairer, more transparent, and more accountable. They are designed to work together — and they do, once you understand their combined requirements.
The organisations that will manage this well are those that treat pay transparency and AI governance as connected problems, not parallel workstreams. The job architecture, the pay equity analysis, the bias testing, the audit trail — these serve both regulations simultaneously. The governance structure that satisfies one can, if designed correctly, satisfy both.
The organisations that will struggle are those that have assigned pay transparency to HR and AI compliance to IT and never joined the two conversations up.
Acuity AI Advisory works with Irish employers on both pay transparency compliance and EU AI Act obligations. If your organisation is using AI tools in its HR function and has not assessed the dual compliance position, our pay transparency advisory service includes a structured review of HR AI systems against EU AI Act classification criteria — so you can fix one problem without creating another.