
Before You Roll Out AI in Your Practice: A Checklist for Legal Compliance


Ger Perdisatt

Founder, Acuity AI Advisory

Ten practical checks every Irish law firm should complete before deploying AI tools — covering data governance, privilege, insurance, training, and incident response.

Most law firms that run into problems with AI deployment did not lack good intentions. They lacked a structured process for asking the right questions before going live. What follows is a practical pre-deployment checklist for Irish legal practices. It is not exhaustive, but it covers the areas where governance gaps most often appear.

1. Classify the tool by risk

Identify whether the AI application falls under the EU AI Act's high-risk classification. Tools that assist substantive legal analysis, support access-to-justice decisions, or automate compliance-critical tasks face more stringent requirements than general productivity tools. If you are unsure where a tool sits, treat it as potentially high-risk until you have completed the analysis. See our overview of EU AI Act obligations for Irish law firms.

2. Conduct a data residency audit

Before any contract is signed, establish where the tool processes and stores data. Confirm whether EU-hosted processing is available, whether it is the default or requires configuration, and whether the vendor or its sub-processors can access client data in identifiable form. Obtain this in writing.

3. Review data processing agreements

Every AI vendor processing personal data on your behalf is a data processor under GDPR. The DPA must include the clauses required by Article 28 GDPR: processing only on documented instructions, appropriate technical and organisational measures, sub-processor controls, and data return or deletion on termination. Generic vendor terms frequently do not meet this standard without negotiation.

4. Assess privilege implications

For each planned use case, consider whether processing client communications or privileged documents through the AI system could constitute disclosure to a third party in a way that affects privilege. This is a legal analysis, not an IT analysis. Get a view from a solicitor with data protection experience before going live with document review or due diligence applications.

5. Update your engagement letters

If AI tools will be used in client matters, consider whether your terms of engagement should notify clients of this and in what circumstances. Some clients — particularly institutional clients with their own AI governance requirements — will want explicit confirmation. Getting ahead of this is cleaner than responding to a client query mid-matter.

6. Train staff on data classification

Implement a clear internal policy on what categories of data may and may not be processed through AI tools. Highly sensitive or privilege-critical material may warrant exclusion from AI-assisted workflows regardless of the vendor's compliance posture. Training should be documented and completed before the tool goes live, not after the first incident.

7. Check your professional indemnity cover

Speak with your professional indemnity insurance (PII) broker about how AI use in practice is treated under your current policy. Questions to ask: does the policy cover claims arising from AI-assisted advice? Are there disclosure obligations? Would an unnotified AI deployment create a coverage gap? Insurers are still developing their position on this, which is a reason to ask now rather than at renewal.

8. Map human oversight for high-risk applications

For any AI output that informs legal advice or a decision with legal consequences, document how human review is built into the workflow. The EU AI Act requires this for high-risk systems. It is also sound professional practice. An AI tool that flags issues in a contract review is only as good as the review process that acts on its output.

9. Establish an incident response procedure

Define in advance what happens if a data breach occurs in connection with an AI tool. Who is notified, in what order, within what timeframes? A breach involving client data in a legal context may engage GDPR notification requirements (72 hours to the DPC), client notification obligations, and Law Society reporting. These timelines do not accommodate improvisation.

10. Set a review cadence

AI tools and the regulatory environment around them are both moving. Build a six-monthly review into your governance cycle to reassess which tools are in use, whether their risk classification has changed, and whether staff training remains current. The August 2026 EU AI Act deadline for high-risk applications is a forcing event — use it to establish a governance rhythm that continues beyond it.

This kind of structured pre-deployment work is not bureaucratic overhead. It is what distinguishes a firm that can demonstrate responsible AI use from one that cannot. If you want support working through these checks, we can help.
