The arrival of generative AI has prompted both enthusiasm and considerable anxiety within the legal profession. For commercial lawyers, the question is no longer whether AI will change how they work, but how quickly and on whose terms.
A commercial property solicitor advising on a service charge dispute, or a family solicitor handling divorce proceedings, must give their client accurate advice.
An error can trigger professional negligence claims, regulatory censure or catastrophic commercial loss.
This creates an obvious tension with the most frequently cited limitation of generative AI: its tendency to hallucinate — to produce confident, fluent, and entirely fabricated statements of law, including invented citations, misattributed holdings, and statutory provisions that do not exist.
Where AI can support legal professionals
Generative AI should not replace the human judgment involved in negotiating a rent review mechanism, deciding whether a personal guarantee is appropriate or advising on the interplay between a lease’s alienation provisions and the client’s business needs.
What it is increasingly relied upon to do, through reliable sector-specific subscription legal tools rather than freely available chatbots such as ChatGPT, is compress the administrative burden that precedes that judgment.
This includes:
- Summarising title documents and search results
- Cross-referencing registers
- Flagging missing consents
- Assembling disclosure schedules
A solicitor who arrives at a complex title query with that groundwork ready for review is better placed to exercise professional judgment efficiently than one who has spent the preceding hours assembling it manually.
Responsibility cannot be delegated to a machine
The Law Society of England and Wales has been clear: human oversight is essential, and solicitors remain professionally accountable for all advice provided, regardless of what tools assisted in its preparation.
AI is a tool, and the professional obligations of the lawyer using it are unchanged. A solicitor who relies on a bare AI model without appropriate verification or uses a consumer-facing product where client data may be inadequately protected is potentially in breach of professional duties.
A solicitor who uses a content-grounded, citation-backed product within a secure enterprise environment, reviews the output against verifiable sources and applies their own professional judgment is doing something meaningfully different, and something the regulatory framework is broadly equipped to accommodate.
How Palmers uses AI responsibly
Given the above concerns, Palmers’ policy is to restrict the generative AI we use for legal research, analysis and risk assessment to tools grounded exclusively in verified, curated, jurisdiction-specific content from market-leading legal information providers.
The platform we are exploring provides not merely the text of decided cases but also editorial summaries, citator information and practitioner commentary. This material is not generated from scraped public data; it is authored, edited and updated by identifiable legal experts who are accountable for its accuracy.
The technology incorporates tooling that reduces the risk of hallucination. The benefits of a model backed by a retrieval-augmented generation (RAG) system are significant and often insufficiently understood. A standard large language model draws only on its own pretrained weights, with all the risks of hallucination that entails.
A RAG-based system mitigates this risk by pairing the LLM with a search tool that retrieves relevant passages from a curated, authoritative corpus, so the model generates its answer from that material rather than from memory alone.
When the corpus consists of jurisdiction-specific, editorially reviewed legal content, such as decided cases with citations and practitioner commentary authored by identifiable legal experts, the system draws from highly reliable sources. This gives the user the means to fact-check its output quickly and confidently.
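For the technically curious, the sketch below shows this retrieval-and-grounding loop in miniature. It is a simplified, hypothetical illustration rather than the architecture of any particular product: the two-entry corpus, the keyword-overlap scorer and the call_llm placeholder are all assumptions standing in for a production search index and model API.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Everything here is illustrative: the tiny corpus, the naive keyword
# scorer and the call_llm() stub stand in for a real search index and
# a real model API.

CORPUS = [
    {"citation": "Case A (illustrative)",
     "text": "Consent to assignment must not be unreasonably withheld."},
    {"citation": "Statute B (illustrative)",
     "text": "The landlord must respond to a consent request within a reasonable time."},
]

def retrieve(query: str, corpus: list[dict], k: int = 2) -> list[dict]:
    """Rank passages by keyword overlap; real systems use vector search."""
    terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(terms & set(doc["text"].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, passages: list[dict]) -> str:
    """Ground the model: it may answer only from the cited passages."""
    sources = "\n".join(f"[{p['citation']}] {p['text']}" for p in passages)
    return (
        "Answer using ONLY the sources below, citing each source relied on. "
        "If the sources are insufficient, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for the model call; a real client would go here."""
    return f"(model response to a {len(prompt)}-character grounded prompt)"

question = "Can consent to assignment be unreasonably withheld?"
print(call_llm(build_prompt(question, retrieve(question, CORPUS))))
```

Because each passage carries its citation through to the prompt, every proposition in the output can be traced back to an identifiable source, which is precisely what makes the fact-checking described above quick and reliable.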
The AI tool is, in effect, the icing on the cake. The cake itself is composed of two layers: the reliable data content, representing the accumulated editorial labour of generations of legal scholars and practitioners assembled within proprietary databases that cannot be freely reproduced; and the highly skilled lawyers at Palmers, who have the experience and ability to review and apply both that content and the generated results to best effect.
Critically, the platform also keeps client information confidential. Data is not used to train the underlying model. AI-generated output is backed by sourced, verifiable legal content that is auditable and traceable.
Our use of any platform incorporating generative AI will remain provider-agnostic: Palmers continuously reviews the market so that we continue to use the most suitable, reliable tools as the landscape develops.
Why law firms must engage with AI
Firms that do not engage with generative AI tools will increasingly find themselves at a structural disadvantage. They will be slower, more expensive and less competitive on the more commoditised elements of transactional work, as clients become more familiar with what AI-assisted legal services can deliver.
Palmers is investing in content-grounded, professionally validated tools precisely because the distinction between those tools and consumer-facing chatbots matters.
The accountability for advice sits with the lawyer, not the machine. While the icing is certainly attractive, it is still the cake that matters most.
If you have any questions about how we use AI to enhance our legal offering at Palmers, get in touch with our experts.