The evolution of AI in the legal profession: Reflections from the frontline
By Rory O’Keeffe, Founder of RMOK Legal, Trustee of The Solicitors’ Charity, and AI technologies lawyer
In recent years, artificial intelligence (AI) has emerged as a key tool for all industries, including legal professionals. As someone who advises on AI and innovative technologies, I’ve witnessed its swift evolution, especially within legal research. In my previous blog, I noted that AI is quickly becoming an indispensable asset across law firms, in-house legal teams, and even in expanding access to justice. One year on, this is even more the case.
Aligning law and AI in the real world
Legal research platforms, once the dependable mainstays of solicitors’ practices, have been transformed. Many early providers have upped their game, launching AI-powered tools to streamline access to case law, legislation, and commentary. Some now embed directly within Word, offering clause suggestions or template drafting assistance, online tools that can sit comfortably within existing workflows.
“Legal firms are scrutinising not only capability, but also return on investment. AI tools may promise faster turnaround and broader insights, but they come with costs: financial, operational, and ethical. This increased scrutiny is healthy; it ensures legal tech evolves responsibly.”
I’m encouraged that many of these tool developers are not claiming to have produced a finished product. They’re listening to feedback, adjusting course, and building with improvement in mind. That iterative, user-led development process mirrors how sound legal arguments are formed – with care, challenge, and revision.
Delivering time savings
The most significant benefit AI has delivered so far is simple: time savings. Legal technology has always promised to cut admin, whether through Outlook, case management systems, or e-discovery. AI takes that further. Many of today’s tools produce a ‘junior lawyer’ level response, helpful for a first pass at case summaries, precedent searches or drafting.
Of course, this brings us to the hallucination problem. We’ve seen the high-profile examples – lawyers presenting fabricated case law after placing too much trust in AI-generated output. So, the most sophisticated research tools now include source citations and bibliographies. Like any good junior lawyer, the AI presents you with the evidence trail for verification. It’s still up to the seasoned legal professional to interrogate and interpret that information.
The net result is that lawyers can navigate large volumes of information more quickly and cost-effectively. But as I often say: eyes wide open. Don’t be seduced by the hype. One recent report described the current AI landscape as “a skyscraper built on quicksand.” If you consider that AI is built on data, the data is, in theory, out of date as soon as it is uploaded. Implementation and ongoing development must be done with care.
Access to justice
While my own work isn’t in litigation and access to justice, I’ve heard compelling examples from those who are. There are now AI-powered chatbots that can guide individuals through common issues – such as tenancy disputes, immigration questions, and basic employment rights – and provide draft letters and template responses.
This doesn’t replace legal advice, but it provides a critical first step for those who otherwise wouldn’t engage with a solicitor.
“It’s a win-win: the public gets informed; firms can triage enquiries more efficiently and direct their time where it adds real value. This shift is not just technological, it’s also a subtle shift in legal business models, allowing firms to offer clarity up front without commitment or cost.”
New responsibilities
There’s a perception that AI may lessen workloads. That’s partly true, but it also introduces a new responsibility: understanding the tools you use.
In the past, we could rely on IT departments and third parties to manage our tech stack. Today, lawyers must engage with how these systems operate. Where does the data go? How secure is it? How does it interface with other systems? AI is no longer “just another programme”. If it produces a result you act upon, you must be confident in its integrity and reliability. That’s especially true when dealing with client-confidential material. Responsible AI isn’t just about ethics, it’s about being able to justify and trace the conclusions your tools produce.
Smaller firms step up
Perhaps most exciting is the uptake among smaller firms. Once priced out of advanced technology, they’re now benefitting from affordable, accessible AI tools. As larger firms incorporate AI into standard packages, smaller firms are following quickly, leveraging off-the-shelf tools that can be customised and scaled to their needs. The legal tech gap is narrowing.
Demystifying AI
One of the ongoing challenges is the confusion about what AI is. AI in legal tech isn’t always about sentient machines or predictive judgement. Often, it’s simply advanced machine learning – tools trained to respond to inputs based on data patterns, and to evolve with ongoing use.
Some tools learn only from what you feed them. Others, like Gemini or Copilot, pull in insights from across the web. Each comes with strengths and risks. If a model trains on publicly available law firm insights, it may miss nuances or be overly generic. The best advice is this: Treat AI’s output like a junior’s memo – it’s a starting point, not a conclusion. In fact, a good junior lawyer will be able to give you a reasoned conclusion, which the AI tools are (so far) unable to do.
Bias and the human factor
AI doesn’t operate in a vacuum. Data bias, privacy concerns, and inadvertent exclusion are real. The best example I’ve seen came from a recruitment tool designed to be fair and inclusive. By narrowing candidate searches to a 10-mile radius, with the intention of hiring locally, it ended up selecting an overwhelmingly white cohort. Why? Because the office was in an affluent area where, due to systemic racism, the majority of residents were white. The result wasn’t intentional, but it was a powerful reminder that bias lives in data as much as in people.
In legal practice, especially employment and discrimination law, these nuances are critical. Lawyers must be vigilant about the data AI is trained on and the assumptions built into its algorithms.
What’s next for AI in law?
So far, I haven’t seen any entirely new AI technologies emerge, just refinements of what’s come before (e.g., agentic AI). But that’s not a bad thing. The market is stabilising. The hype is cooling. Real innovation tends to follow once the dust settles.
The next phase may depend heavily on UK legislation. The AI regulation debate continues, with the possibility of a more industry-friendly framework emerging via product liability law or sector-specific rules. That could offer clarity and confidence, encouraging responsible innovation while maintaining public trust.
Stay human
AI will never replace lawyers. But it will – and already is – reshaping how we work, what we offer, and how we structure our time and deliver value. It won’t eliminate judgement, empathy, or nuance. But it can reduce the drudgery, speed up the research, and open the door to those who need guidance most.
“The tools are improving. So must we. But above all: be aware, be informed, and stay human.”