The conversation about artificial intelligence in legal services has generated considerably more heat than light. Vendors promise transformative efficiency. Commentators warn of civilisational disruption. Regulators issue principles. What is in shorter supply is a clear account of what the existing legal and regulatory framework actually requires of law firms and legal practitioners using AI — today, not in some hypothetical future. This article attempts to provide that account.
The short answer is that the UK's regulatory framework for AI in legal services is fragmented, incomplete, and in several respects insufficient. It does not, however, leave practitioners without obligations. The SRA Standards and Regulations impose duties that apply irrespective of the technology used to discharge legal work. Case law is developing rapidly, with courts increasingly willing to sanction firms for AI-related failures. And the publication of the UK Jurisdiction Taskforce's Legal Statement on Liability for AI Harms in January 2026 has clarified — in ways that should concern every managing partner — precisely where liability will rest when AI outputs cause client loss.
The UK's Regulatory Architecture: No AI Act, Existing Regulators
The starting point is structural. The UK does not have, and is not currently expected to have before mid-2026 at the earliest, a standalone AI Act equivalent to the EU's comprehensive legislative framework. The previous government's declared "pro-innovation approach" to AI regulation deliberately avoided creating a central regulatory body or overarching statute. Instead, the UK relies on existing sector regulators — the Solicitors Regulation Authority, the Information Commissioner's Office, the Financial Conduct Authority, the Medicines and Healthcare products Regulatory Agency — each applying AI oversight through their existing mandates and principles.
This approach offers flexibility. It also creates genuine operational ambiguity. A law firm deploying an AI tool for document review, client intake, or legal research faces a patchwork of obligations drawn from data protection law (UK GDPR and the Data Protection Act 2018, as modified by the Data (Use and Access) Act 2025), professional conduct rules, consumer protection frameworks, and common law duties — with no single authoritative source telling it how those obligations interact in an AI context.
The Data (Use and Access) Act 2025, now being phased in between June 2025 and June 2026, is worth noting specifically. It relaxes the previous restrictions on automated decision-making under UK GDPR Article 22, allowing organisations greater scope to use AI-driven decision tools, provided that appropriate safeguards and human oversight remain in place. For firms using AI to produce client advice or conduct assessments that inform legal outcomes, this is a meaningful change — but it does not remove the professional conduct obligations that sit alongside the data protection framework.
What the SRA Standards and Regulations Actually Require
The SRA has offered no substantive guidance specifically addressing AI use in legal practice. This is a significant regulatory gap, and one that the Law Society has publicly pressed the SRA to address. In the meantime, practitioners must work from the existing Standards and Regulations, which are not as silent on the question as their age might suggest.
Three rules are of immediate relevance. Rule 3.2 of the SRA Code of Conduct for Solicitors requires that solicitors ensure the service provided to clients is competent and delivered in a timely manner. This obligation is technology-neutral: it applies whether a piece of legal work is produced by a senior partner dictating from memory or by a generative AI tool producing a first draft at two o'clock in the morning. The standard of the output — not the method of its production — is what Rule 3.2 measures.
Rule 3.5 addresses supervision and delegation. Where work is carried out on a solicitor's behalf by others, the solicitor remains accountable for that work. The SRA and the courts have consistently extended this principle to technology-assisted work. A firm that deploys an AI tool to conduct legal research, draft correspondence, or analyse contractual provisions is not delegating its professional responsibilities to the tool; it is using the tool as an instrument and retaining full accountability for the outputs the tool produces. The failure to verify, review, or critically assess AI output is therefore not a mitigation — it is itself a breach.
Rule 6.3 imposes the duty to keep the affairs of current and former clients confidential, and with it the obligation to safeguard legal professional privilege. This requirement presents acute challenges in the AI context. Many commercially available large language model tools process inputs on third-party infrastructure, raising real questions about where client data goes, who can access it, and whether privilege attaches to communications mediated through AI tools. Firms that have not conducted a proper data protection impact assessment and established contractually robust terms with AI vendors before deploying those tools on client matters are almost certainly in breach of Rule 6.3, even if they have not yet encountered the adverse consequence that makes the breach visible.
Garfield.Law and What the SRA's First AI Authorisation Actually Tells Us
In May 2025, the SRA authorised Garfield.Law Ltd as the first AI-driven law firm in England and Wales — and, by the SRA's own account, the first in the world. Co-founded by former City litigator Philip Young and quantum physicist Daniel Long, Garfield.Law uses a large language model to guide small and medium-sized businesses through the small claims court process, handling matters up to £10,000 in value. The SRA's Chief Executive, Paul Philip, described the decision as "a landmark moment for legal services in this country."
The Garfield.Law authorisation is significant not only for what it permits but for the conditions the SRA attached to it. Those conditions are instructive because they represent the closest thing to concrete SRA thinking on AI governance that currently exists. The SRA required: that the AI system not propose case law (to mitigate the hallucination risk); that client approval be obtained at every step in the process before any action is taken; that named regulated solicitors remain ultimately accountable for all system outputs and for anything that goes wrong; and that minimum insurance requirements be maintained for client protection. The firm is not, in the SRA's formulation, "autonomous" — the human approval and oversight architecture is the price of authorisation.
For firms deploying AI in less structured ways — using general-purpose tools for drafting or research without the bespoke oversight conditions the SRA imposed on Garfield — the Garfield conditions should serve as a benchmark. If the SRA considered those safeguards necessary for a purpose-built, narrowly scoped AI system with human sign-off at every stage, the absence of equivalent safeguards in a general deployment context is a conspicuous gap.
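The Garfield conditions describe, in substance, a human-in-the-loop control pattern: a hard prohibition on case-law proposals, explicit client approval before any action, and a named solicitor accountable for every output. Purely as an illustration of that pattern — the class names, method names, and checks below are invented for this sketch and are not drawn from Garfield's actual system or the SRA's authorisation — the control logic might be expressed like this:

```python
from dataclasses import dataclass

@dataclass
class ProposedStep:
    description: str       # e.g. "send letter before action"
    cites_case_law: bool   # the authorisation bars the system from proposing case law
    client_approved: bool = False

class OversightGate:
    """Hypothetical human-in-the-loop gate reflecting Garfield-style controls:
    no case-law proposals, client approval before every action, and a named
    solicitor accountable for everything that passes through."""

    def __init__(self, accountable_solicitor: str):
        self.accountable_solicitor = accountable_solicitor
        self.audit_log: list[str] = []

    def execute(self, step: ProposedStep) -> str:
        if step.cites_case_law:
            raise PermissionError("The system may not propose case law.")
        if not step.client_approved:
            raise PermissionError("Client approval is required before any action.")
        record = f"'{step.description}' approved; accountable: {self.accountable_solicitor}"
        self.audit_log.append(record)
        return record

# Usage: every step either clears both gates or is blocked before anything happens.
gate = OversightGate(accountable_solicitor="J. Smith (SRA-regulated)")
print(gate.execute(ProposedStep("issue small claim", cites_case_law=False, client_approved=True)))
```

The point of the sketch is not the code but the architecture: the approvals are enforced before action, not reviewed after it, and accountability is attached to a named individual rather than to "the system".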
AI Hallucinations in the Courts: From Anecdote to Pattern
The phenomenon of AI hallucination — the generation by large language models of plausible but entirely fabricated case citations, statutory provisions, or legal propositions — has moved in less than two years from an academic curiosity to a documented pattern of judicial sanctions in UK courts and tribunals.
By the end of November 2025, the UK had recorded 24 confirmed incidents of AI-generated false citations appearing in court documents, part of an international trend exceeding six hundred cases globally. These are not merely embarrassing; they carry serious professional and legal consequences. The case of Choksi v IPS Law LLP is illustrative. A managing partner's witness statement was found to contain fabricated cases, invented authorities, and misleading "precedents." A paralegal subsequently confirmed reliance on Google's AI tool. The firm was criticised for having no meaningful verification system in place.
The judicial response has hardened. The High Court has now issued warnings that the submission of AI-generated false information to a court could expose lawyers not merely to professional regulatory sanction but to criminal liability, including contempt of court and potentially perverting the course of justice. These are not hypothetical risks. A firm that submits an AI-drafted document to a court without verifying the accuracy of its citations, authorities, and legal propositions is operating in territory where both regulatory and criminal exposure are live.
The lesson is not that AI must not be used in contentious work. The lesson is that AI output must be treated as a draft requiring human verification, not as a finished product that can be placed before the court on trust. The two are not the same thing, and failure to observe the distinction is what is generating sanctions.
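Part of that verification step can be made mechanical. The sketch below is a minimal illustration in Python, not any court-endorsed or vendor tool: the citation pattern is deliberately simplified, the function names are invented, and the point is only that every citation in an AI-assisted draft can be surfaced onto a checklist that a named solicitor must clear against a primary source before filing.

```python
import re

# Neutral citations in England and Wales follow patterns such as
# [2025] EWHC 1234 (KB) or [2024] EWCA Civ 567. This pattern is a
# simplified illustration and will not catch every citation format.
NEUTRAL_CITATION = re.compile(
    r"\[\d{4}\]\s+(?:UKSC|UKPC|EWCA|EWHC|UKUT|UKFTT)\s+(?:Civ|Crim)?\s*\d+(?:\s*\([A-Za-z]+\))?"
)

def extract_citations(draft_text: str) -> list[str]:
    """Return every string in the draft that looks like a neutral citation."""
    return [m.group(0).strip() for m in NEUTRAL_CITATION.finditer(draft_text)]

def verification_checklist(draft_text: str) -> list[dict]:
    """Every citation starts unverified; a named solicitor must confirm each
    against a primary source (e.g. the Find Case Law service) before filing."""
    return [
        {"citation": c, "verified_by": None, "source_checked": None}
        for c in extract_citations(draft_text)
    ]

draft = "As held in [2025] EWHC 1234 (KB) and applied in [2024] EWCA Civ 567 ..."
for item in verification_checklist(draft):
    print(item)
```

Automation of this kind finds the citations; it cannot confirm they exist or say what they decided. The human check against the law report remains the step that discharges the professional obligation.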
The UKJT Legal Statement: Existing Law Is Sufficient, and That Is Not Reassuring
In January 2026, the UK Jurisdiction Taskforce — a body established by the Ministry of Justice to clarify key questions at the intersection of technology and law — published its Legal Statement on Liability for AI Harms under the private law of England and Wales. The Statement is open to consultation until 13 February 2026 and is non-binding on courts, but it represents the most authoritative account to date of how English law is likely to approach AI liability questions.
The Statement's central proposition is that existing English private law — principally the law of contract and negligence — is adequate to accommodate AI harms. New legislation is not required. AI does not have legal personality under English law and therefore cannot be held responsible for physical or economic harm; liability must instead be attributed to the legal persons who develop, train, deploy, or use the system. As the Statement puts it, "in many cases the addition of AI will simply be considered a tool of those who exercise relevant control over it."
For law firms, the professional negligence dimension of the Statement is the most directly relevant. The Statement applies orthodox negligence analysis (duty of care, breach, causation, remoteness) to AI-assisted legal work, and it confirms that the integration of AI into legal services does not alter the duty owed to clients. What it does alter is the factual matrix within which breach is assessed: a solicitor who relies blindly on AI output without verification may fall below the standard of care expected of a reasonably competent practitioner; equally, a solicitor who refuses to use AI tools where competent practitioners in their field routinely use them may also fall below that standard, because the failure to use available technology is itself a departure from what the reasonable practitioner would do.
That second proposition — negligence for failing to use AI — is not yet the subject of decided English authority, but its logic is consistent with the direction of the case law on technology adoption in professional contexts. Practitioners should not assume that the cautious path of avoiding AI is the safe path.
The EU AI Act: Cross-Border Firms Cannot Ignore It
The EU AI Act, the world's first comprehensive binding AI regulatory framework, entered into force in August 2024, and the bulk of its obligations — including the high-risk regime most relevant to legal services — apply from 2 August 2026. Although it is not directly applicable in the UK following Brexit, it remains highly relevant to any firm with EU operations, EU clients, or AI tools developed or deployed by EU-based providers.
The Act's risk-based categorisation is particularly significant for the legal sector. AI systems used in the "administration of justice and democratic processes" fall within the Act's high-risk category under Annex III. This includes AI tools used to assist courts and tribunals, to produce legal analysis that informs judicial or quasi-judicial decisions, or to automate aspects of dispute resolution. Providers and deployers of high-risk AI systems face substantial obligations: conformity assessments, registration in an EU database, technical documentation requirements, human oversight mechanisms, and incident reporting obligations.
For cross-border firms advising on matters with an EU dimension, or using AI tools sourced from EU providers, the August 2026 deadline is not a distant consideration. Compliance assessments, vendor due diligence, and the mapping of AI deployments against the high-risk taxonomy need to be underway now if firms are to be compliant on the date the obligations bite.
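The mapping exercise itself needs little more than a structured inventory. A minimal sketch, with invented tool names and a deliberately simplified reading of the Annex III "administration of justice" category — real classification requires legal analysis of the Act, not a boolean flag — might look like this:

```python
from dataclasses import dataclass

@dataclass
class AIDeployment:
    tool: str
    use_case: str
    eu_exposure: bool           # EU operations, EU clients, or EU-based provider
    informs_adjudication: bool  # assists courts/tribunals or dispute resolution

    def likely_high_risk(self) -> bool:
        """Crude first-pass screen against Annex III's 'administration of
        justice' category; flags deployments needing full legal assessment."""
        return self.eu_exposure and self.informs_adjudication

# Hypothetical inventory entries for illustration only.
inventory = [
    AIDeployment("DraftAssist", "first drafts of client correspondence", False, False),
    AIDeployment("DisputeBot", "automated ADR recommendations", True, True),
]
for d in inventory:
    status = "high-risk review needed" if d.likely_high_risk() else "standard diligence"
    print(f"{d.tool}: {status}")
```

Even a screen this crude forces the question most firms have not yet answered: which of their AI deployments touch an EU nexus and an adjudicative function at the same time.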
The Duty of Competence Gap and What Firms Should Be Doing
The honest assessment of where the regulatory framework currently stands is that it imposes real obligations — through the SRA Standards, the law of professional negligence, data protection law, and (for EU-connected firms) the AI Act — but offers practitioners little concrete guidance on how to discharge those obligations in practice. The SRA has not issued substantive guidance on AI use. Most firms are creating protocols on an ad hoc basis, calibrated to their own risk appetites rather than any regulatory benchmark.
That is an unstable position. The question is not whether a compliance failure will occur in the sector — it will, and in a number of cases it already has — but which firms will be well-positioned to respond when it does. The firms that are well-positioned will be those that have, before the event: adopted a documented policy governing which AI tools may be used, by whom, and for what classes of work; built verification protocols that treat AI output as a draft and require a named solicitor to check citations, authorities, and legal propositions before anything reaches a client or a court; conducted data protection impact assessments and secured contractually robust terms with AI vendors; preserved clear lines of supervision and accountability for technology-assisted work; and trained fee-earners on the limitations of the tools they are using.
A Framework in Motion
The UK's approach to AI regulation in legal services is unlikely to remain static. The Law Society has pressed for clear regulatory guidance and preservation of client protections throughout AI usage, including confidentiality and legal professional privilege. A comprehensive UK AI Bill — delayed from its original March 2025 timetable — is now expected no earlier than May 2026. The ICO is expected to publish a statutory Code of Practice on AI and automated decision-making in the near term. The SRA will, at some point, be forced by events to issue substantive guidance on AI competence and supervision.
The firms that wait for that guidance before building their AI governance frameworks are adopting a strategy that has already produced regulatory censure and judicial sanctions for their peers. The obligations under the SRA Standards and Regulations, and the liability framework confirmed by the UKJT Legal Statement, do not depend on the SRA having issued a practice note. They are in force now, and they will be applied by courts and regulators to conduct that is occurring today.
The appropriate response is not to avoid AI — the direction of negligence law suggests that may itself become a liability. The appropriate response is to deploy AI as a sophisticated practitioner deploys any complex tool: with an understanding of its limitations, a framework for managing its risks, and the professional judgement to know when its outputs require revision, rejection, or independent verification. That is what competence has always required. AI does not change that standard; it changes the context in which it must be applied.
Lexkara & Co advises firms and practitioners on regulatory compliance, professional liability, and the governance frameworks necessary to deploy emerging technologies in legal practice. If you have questions about your firm's AI governance obligations or exposure, we welcome your enquiry.