Generative AI tools are already in use across most Irish organisations — often without a governance framework to manage them. This checklist covers the essentials.
Generative AI has entered Irish workplaces faster than governance frameworks have followed. ChatGPT, Microsoft Copilot, Claude, Gemini — and dozens of sector-specific tools built on these platforms — are being used daily by employees across organisations that have not yet established clear policies for their use. That gap creates real risk.
This checklist covers the governance essentials for Irish organisations that have deployed or are planning to deploy generative AI tools.
Policy foundations
Have you published a generative AI acceptable use policy? Employees need to understand what is permitted and what is not — what data can be shared with AI tools, what outputs require human review, what uses are prohibited. Without a policy, use patterns are shaped by individual judgement, which produces inconsistent and sometimes high-risk behaviour.
Does your policy address confidentiality and data sharing? Generative AI tools process the inputs they receive, and some retain those inputs or use them for model training. Employees sharing confidential client information, personal data, or commercially sensitive material with external AI tools are creating data governance risks. The policy must address this explicitly — not leave it to employees to infer.
Have professional body obligations been considered? For regulated professions — solicitors, accountants, financial advisers — professional body guidance on AI use must be reflected in the organisation's policy. Law Society guidance, for example, makes clear that professional responsibility for client work remains with the practitioner regardless of AI involvement.
EU AI Act considerations
Have your generative AI tools been classified under the EU AI Act? Most generative AI tools used for drafting, summarising and research fall into the limited or minimal risk categories under the EU AI Act. However, generative AI used in contexts that influence regulated decisions — financial advice, legal analysis, HR assessments — requires closer classification scrutiny: AI used in employment-related decisions, for example, can fall within the high-risk category under Annex III. Do not assume the tool is low-risk without checking.
Do you know your vendor's EU AI Act compliance position? The EU AI Act distinguishes between general-purpose AI (GPAI) providers and deployers. GPAI providers — including the developers of the major foundation models — carry transparency and documentation obligations. Deployers carry their own obligations regardless of the vendor's compliance position. Understand what your vendors have committed to and what gaps you need to fill.
Operational governance
Is there a human review process for consequential AI outputs? Generative AI outputs should not be used in consequential decisions without human review. This includes client-facing communications, legal documents, financial analyses, and HR decisions. The review process should be named, documented and actually practised — not a nominal sign-off.
Is AI use being monitored at an organisational level? Individual employees' AI tool usage is often invisible to the organisation. This makes it impossible to identify high-risk patterns, understand the scope of AI use, or demonstrate governance to regulators. Consider what visibility is needed and how to achieve it.
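One practical route to organisational visibility is summarising existing network or proxy logs against a watchlist of known AI tool domains. A minimal sketch in Python — the domain list and the two-field `user domain` log format are hypothetical assumptions, not a description of any specific logging product:

```python
# Illustrative sketch: count requests to known generative AI domains
# from simplified proxy log lines of the form "user domain".
# The watchlist and log format are assumed for illustration only.
from collections import Counter

AI_TOOL_DOMAINS = {  # assumed watchlist; extend for your own environment
    "chat.openai.com",
    "copilot.microsoft.com",
    "claude.ai",
    "gemini.google.com",
}

def summarise_ai_usage(log_lines):
    """Count requests per AI tool domain across all users."""
    usage = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in AI_TOOL_DOMAINS:
            usage[parts[1]] += 1
    return usage

sample = [
    "alice chat.openai.com",
    "bob claude.ai",
    "alice chat.openai.com",
    "carol intranet.example.ie",
]
print(summarise_ai_usage(sample))
```

Even a rough summary like this is enough to show regulators that the organisation knows which tools are in use and at what scale.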
Do employees know when and how to disclose AI use? In some contexts — client work, regulatory submissions, published content — disclosure of AI use is expected or required. Employees need clear guidance on when disclosure is appropriate and how to communicate it.
Data and security
Has your IT security team assessed the data security position of AI tools in use? Generative AI tools vary significantly in how they handle input data — whether it is retained, used for training, accessible to the vendor, or subject to geographic data residency requirements. These questions need IT security assessment before tools are deployed at scale.
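The data-handling questions above can be captured in a structured record so each tool is assessed consistently before deployment. A minimal sketch — the field names and follow-up wording are hypothetical, not a standard assessment schema:

```python
# Illustrative sketch: a per-vendor record of data-handling posture,
# with a helper that lists points needing follow-up.
# Field names are assumptions for illustration, not a standard schema.
from dataclasses import dataclass

@dataclass
class VendorDataAssessment:
    vendor: str
    retains_inputs: bool      # does the vendor store prompts and outputs?
    trains_on_inputs: bool    # are inputs used for model training?
    eu_data_residency: bool   # is data kept within the EU/EEA?
    dpa_in_place: bool        # is a GDPR data processing agreement signed?

    def open_issues(self):
        """Return the assessment points that need follow-up."""
        issues = []
        if self.retains_inputs:
            issues.append("input retention: review retention period and deletion terms")
        if self.trains_on_inputs:
            issues.append("training on inputs: confirm opt-out or contractual exclusion")
        if not self.eu_data_residency:
            issues.append("data residency: assess transfer mechanism")
        if not self.dpa_in_place:
            issues.append("no DPA: required before personal data is processed")
        return issues

example = VendorDataAssessment("ExampleAI", True, False, False, False)
print(example.open_issues())
```

Recording assessments in one consistent shape makes gaps comparable across vendors and gives IT security a clear follow-up list per tool.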
Is personal data being processed in compliance with GDPR? Inputting personal data into generative AI tools constitutes processing under GDPR. If the tool is provided by a third party acting as a processor, a data processing agreement under Article 28 is required. Many organisations have deployed AI tools without this basic GDPR infrastructure in place.
What to do next
If your organisation cannot answer yes to most of these questions, the priority is a generative AI governance review — producing a policy, a risk classification, and an operational governance framework that addresses the gaps.
Acuity AI Advisory provides AI governance frameworks for Irish organisations across regulated industries and professional services. Contact us to discuss your position.