Artificial Intelligence (AI) and Large Language Model (LLM) Notice

April 2025

Commitment to Responsible and Ethical Innovation

Artificial Intelligence (AI) technologies, including Large Language Models (LLMs), are sometimes used to support and enhance legal services. These technologies are implemented in a way that upholds the professional duties under the Solicitors Regulation Authority (SRA) Standards and Regulations, including the SRA Code of Conduct for Solicitors. I remain committed to delivering legal services with the highest standards of competence, integrity, confidentiality, and transparency.

This notice explains how AI and LLMs are used, their limitations, and the ethical and regulatory principles that guide their use.

Distinguishing AI and LLMs in Legal Applications

Artificial Intelligence (AI)

AI refers to a broad category of computer systems capable of performing tasks typically requiring human intelligence, such as prediction, automation, or data analysis. In legal practice, AI may assist with:

– Contract review and analytics
– Document classification and tagging
– Workflow automation and case triage

Large Language Models (LLMs)

LLMs are a subset of AI focused on understanding and generating human language. In a legal context, LLMs are used for:

– Drafting and summarising legal documents
– Assisting with legal research
– Generating template-based clauses
– Powering chat-based support tools

Unlike human lawyers, LLMs do not possess legal understanding or reasoning. They generate text based on patterns in their training data, and their outputs may therefore include inaccuracies or “hallucinations”: false but plausible-sounding statements.

Main Limitations of AI and LLMs in Legal Practice

Hallucinations and Inaccuracy

LLMs can generate fictitious legal authorities or incorrect conclusions. All outputs are reviewed by a suitably qualified and experienced solicitor before use in any legal context.

Lack of Legal Reasoning

LLMs do not “understand” legal rules or case context and must never be relied upon for legal judgment or advice.

Bias

AI tools may reflect or amplify biases present in training data. Steps are taken to assess and mitigate this risk, aligning with my commitment to fairness and inclusion.

Confidentiality Risks

Personal data and confidential client information are not entered into public AI tools. Where internal tools are used, appropriate data protection, confidentiality, and access controls are enforced to comply with duties under the UK GDPR and the SRA Code of Conduct.

Transparency and Accountability

AI/LLM processes can be opaque. Where relevant, clients are informed about how these tools are used and that a suitably qualified and experienced lawyer retains full responsibility for all legal outputs.

Rapidly Evolving Standards

Regulatory developments and ethical guidance from the SRA and other bodies are reviewed to ensure practices remain current and compliant.

Fine-Tuning for Legal Use

Purpose and Benefits

Fine-tuning LLMs on firm-specific or legal domain data can improve the relevance and accuracy of outputs. This is most effective where:

– outputs need to match jurisdictional nuances or firm-preferred styles; and
– the subject matter involves complex, niche, or specialist legal knowledge.

Limitations

Even when fine-tuned, all outputs are reviewed and validated by a suitably qualified and experienced lawyer.

Ethical and Regulatory Framework

Human Oversight

No legal advice is provided solely by AI or LLMs. All outputs are subject to review and approval by a suitably qualified and experienced lawyer, in accordance with SRA Principle 5 (providing a proper standard of service).

Transparency

AI/LLM tools are used only as secondary research aids. Outputs are always corroborated against primary legal sources, such as legislation and case law.

Confidentiality and Data Protection

Full compliance with the UK General Data Protection Regulation (UK GDPR) and the SRA’s confidentiality obligations is maintained. Client data is processed using AI/LLM systems only when adequate data protection safeguards are in place.

Bias Mitigation

AI outputs are reviewed to identify and correct potential bias. I remain committed to fairness and non-discrimination in the delivery of legal services.

Accountability

The use of AI or LLMs does not diminish the professional accountability owed to you, or your ability to rely on the legal advice provided. Full responsibility for all legal services remains with the lawyer providing advice to clients.

Continuous Review and Governance

This notice is kept under review and will be updated from time to time in line with technological developments, legal and regulatory changes, and professional best practice.

Questions and Contact

If you have any questions about the use of AI or LLMs in legal work, or would like further information, please contact me at: jason@converselaw.com or via my Contact Page.