Legal Sector

AI Policy Template for Solicitors

A generic AI policy is not a law firm AI policy. A solicitors' AI policy must map directly to the Law Society's guidance, address client confidentiality obligations, and set enforceable verification requirements for AI-assisted work.

Request the template

TL;DR

An AI policy for solicitors needs to do more than list rules. It must map directly to the Law Society's December 2025 AI guidance, address confidentiality obligations for AI-processed client data, set verification requirements for AI-generated content in legal documents, and establish supervision obligations for employed solicitors using AI.

What a law firm AI policy must actually do

Most AI policy templates circulating in the legal sector are derivatives of corporate IT policies. They address technology governance and data classification, but they do not address the specific obligations that arise from solicitors' professional conduct duties. A policy that does not map to the Law Society's December 2025 AI guidance is not a law firm AI policy — it is a generic policy applied to a law firm setting.

The critical difference is in verification and supervision. A corporate AI policy might require human review of AI outputs. A solicitors' AI policy must address who must review, what constitutes adequate review, when the supervising solicitor must apply their independent professional judgement rather than accepting AI output, and how that review is documented for the purposes of professional accountability.

Confidentiality obligations add a further dimension. When client matter information enters an AI tool — for drafting, research, or analysis — the solicitor's confidentiality duty follows the data. The policy must address what information may and may not be entered into AI systems, what assessment has been made of vendor data handling, and what client consent (if any) is required.

What a complete solicitors' AI policy covers

  • Scope and definitions — what counts as an AI tool for the purposes of this policy
  • Approved tools and approval process — how new tools are assessed and authorised
  • Permitted uses — what AI tools may be used for in client work
  • Verification requirements — what human review is mandatory before AI output is used
  • Confidentiality and data handling — where client data goes and how it is protected
  • Disclosure obligations — when and how clients are informed of AI use
  • Supervision obligations — principal and partner responsibilities for employed solicitors
  • Prohibited uses — explicit list of what is not permitted
  • Incident reporting — what to do when an AI error occurs in client work
  • Review schedule — how and when the policy is updated as the technology and guidance evolve

EU AI Act obligations that the policy must address

The EU AI Act imposes deployer obligations on firms that use AI systems — and law firms are deployers. The policy must address the Act's requirements alongside the Law Society's guidance: they are separate frameworks and both must be satisfied.

For high-risk AI systems — which may include AI used in legal proceedings or for decisions with significant impact on individuals — the Act's deployer obligations include ensuring conformity assessments have been conducted, implementing human oversight measures, maintaining logs of AI use, and providing transparency to individuals affected by AI-assisted decisions.

The LSRA is the designated sectoral regulator for Irish law firms under the EU AI Act. The August 2026 enforcement deadline means policies must be in place and operational — not still in draft — by that date. A policy that exists only on paper, without accompanying procedures, training, and oversight mechanisms, does not satisfy the obligation.

Why Acuity AI Advisory

Acuity AI Advisory is a contributor to the Law Society of Ireland's AI governance toolkit. The AI policy template for solicitors is grounded in the Law Society's December 2025 guidance and in direct familiarity with how the profession's regulatory framework operates — not adapted from a generic corporate template.

Ger Perdisatt, who leads all engagements, is a former COO of Microsoft Western Europe and holds non-executive directorships at DAA and Tailte Éireann. The governance frameworks Acuity AI produces are informed by board-level governance experience and by practical knowledge of how AI is actually deployed in organisations — not by how policy documents describe it.

The template is delivered as an adaptable working document, not a static PDF. It is accompanied by a briefing on how to implement it effectively — including how to communicate it to fee earners, how to integrate it with existing professional indemnity insurance obligations, and how to maintain and update it as AI tools and regulatory guidance evolve.

Questions

Common questions

What must a law firm AI policy include?

A law firm AI policy must address: which AI tools are approved for use and under what conditions; verification requirements — what human review is required before AI-generated content is used in client work or filed with courts; confidentiality obligations — where client data goes when it enters an AI system and how that is managed; disclosure obligations — when clients must be informed that AI was used in their matter; supervision obligations — how partners and principals oversee AI use by employed solicitors; and prohibited uses — what AI tools or applications the firm does not permit. A policy that omits any of these areas is incomplete.

Does the Law Society require solicitors to have an AI policy?

The Law Society of Ireland's December 2025 AI guidance does not mandate a specific AI policy document. However, it makes clear that existing professional obligations — competence, confidentiality, supervision, and responsibility for work produced — apply in full when AI tools are used. In practice, this means a firm cannot meet those obligations without a policy that governs how AI is used. The absence of a policy is not a defensible position if a complaint arises about AI-assisted work. It is also inconsistent with the Law Society's clear direction that firms should approach AI governance proactively.

What AI uses should a solicitor's AI policy prohibit?

A law firm AI policy should explicitly prohibit: processing client-privileged or confidential information through AI tools that have not been assessed and approved by the firm; using AI-generated content in court documents, legal opinions, or client advice without independent human verification; using AI tools that retain or train on input data without explicit client consent; allowing AI to make autonomous decisions in client matters without solicitor review; and using personal AI accounts for firm work. The prohibition list matters as much as the permission list — an AI policy without clear prohibitions is aspirational rather than operational.

Request the AI Policy Template for Your Practice

Adaptable. Law Society-aligned. Fixed-fee delivery.

Get in touch