The EU AI Act entered into force on 1 August 2024. The first obligations, the prohibitions on unacceptable-risk practices and basic AI literacy duties, took effect on 2 February 2025. Ireland's National AI Office becomes fully operational in August 2026. For Irish SMEs, this means the window for preparation is narrowing — but the situation is not as complicated as the headlines suggest.
What the Act actually covers
The EU AI Act is a risk-based framework. It does not ban AI. It classifies AI systems by their potential to cause harm, and attaches obligations based on that classification. Most AI systems in everyday business use fall into the minimal-risk category — which carries no mandatory obligations beyond good practice.
The obligations become material for high-risk AI systems: those used in hiring and recruitment, credit decisions, insurance underwriting, medical device support, and critical infrastructure. If your business uses AI in any of these areas — or uses a third-party tool that does — you are likely in scope.
Which AI uses trigger obligations for SMEs?
The list is narrower and more specific than many assume. The following types of AI use are likely to trigger obligations:
Automated CV screening or shortlisting. Any AI system that filters or ranks candidates for employment is classified as high-risk under the Act. If you are using a recruitment platform with an AI component, you need to understand what it does.
Credit and risk decisioning. Tools that automate credit scoring, insurance decisions, or financial risk assessment are in scope. This includes off-the-shelf tools from fintech providers.
Customer-facing automated decisions. If your AI system makes or significantly influences decisions that affect customers — in pricing, eligibility, or service — you are likely to have obligations, ranging from transparency duties to the full high-risk requirements, depending on the decision's effect.
Document processing with legal effect. AI used to process legal documents, contracts, or regulatory filings is within scope where those documents affect individuals' rights.
What Irish SMEs should do now
The most common mistake is to wait for clarity before acting. The Act is in force. The obligations exist. What is still developing is the enforcement infrastructure — but that is a reason to prepare, not to defer.
A practical approach for most SMEs:
First: inventory what you use. Many organisations have AI embedded in tools they did not consciously choose — CRM systems, productivity software, recruitment platforms, document automation. The first step is knowing what is in use.
Second: classify your risk. For each AI use, apply the Act's classification logic. Most business applications are minimal or limited risk. Identify which, if any, are high-risk.
Third: document your position. Even for minimal-risk AI, maintaining a clear record of what you use and why strengthens your position in any regulatory review.
Fourth: review your supplier contracts. If you use AI systems through third-party platforms, understand where provider obligations end and your own begin. This is not always clearly stated.
What the Act is not
The EU AI Act is not a requirement to stop using AI, and it does not oblige SMEs to build expensive compliance infrastructure. For most Irish SMEs, the practical outcome of a proper review is confirmation that their current AI use is in the minimal-risk category and that a small amount of documentation is sufficient.
The value of the review is the clarity it provides — and the early identification of any areas where obligations are more substantial.
The Acuity AI position
We work with Irish organisations to conduct structured readiness reviews: inventory, classification, gap analysis, and remediation roadmap. The output is practical and usable, not a generic framework, and the advice is vendor-neutral: we have no platform to sell.
If you are uncertain about your position under the EU AI Act, the starting point is a structured diagnostic — not a consultation with a vendor who has an interest in the outcome.