Business of Law Firms:
Generative AI

By Jake Rickman

What do you need to know this week?

This week, we will finally delve into the (potential) impact of generative AI on commercial law firms. Guiding our coverage is a brief exposé by The Lawyer on Eversheds Sutherland’s recent unveiling of its “generative AI roadmap”. In particular, we will try to generate some use cases for the technology, as well as consider the risks involved and how they might limit its application.

We have shied away from looking at generative AI in much detail, largely because it is hard to separate the noise from the substance (as was the case with crypto and blockchain technologies last year). However, as Eversheds’ leadership observes, the release of ChatGPT in November 2022 means that there is a new “technology in play, which goes to the heart of how knowledge services are delivered.” Accordingly, law firms need to start contending with it.

The Substance in Eversheds’ Roadmap

As is to be expected, there is little in the way of substantive examples and use cases of generative AI in Eversheds’ roadmap, at least as presented to The Lawyer. One reason is no doubt that the way each law firm applies AI technologies is commercially sensitive. Another is that the technology is still in its infancy, and law firms are still getting to grips with how to use it and manage the risks involved.

That said, consider the following quote from Eversheds’ CEO, Lee Ranson:

Our view is that the future is going to be a partnership between people and generative AI … So, you could ask the technology to tell you what the law is in relation to employment in France and England and how it differs. Draft a letter that starts a consultation around whatever your question might be, and that’s done in seconds. And then our lawyers will work with it to make sure these findings, and in some cases, hallucinations, are corrected or challenged.

From this, we can tease out the general promise underlying generative AI: it ought to be able to produce in minutes the kinds of documents that might take a person several days to prepare. Rather than spending a day scouring Lexis+ for jurisdictional differences in employment law and another producing and refining a memo summarising your findings, applied generative AI can give you a draft in a fraction of the time.

But more interesting is the way Ranson touches on the risks of using AI, chief among them the risk of relying on false information (“hallucinations”) that the AI programme generates in response to its inputs. This suggests that the working relationship between AI technology and practitioners may be akin to a senior fee-earner supervising a junior one.

Why is this important for your interviews?

The fact that Eversheds has come out strongly in favour of adopting generative AI signals two things:
  1. Eversheds wants its clients and other key stakeholders to know that it intends to be an early adopter of the technology; and
  2. Other law firms are quietly pursuing implementation strategies of their own.
But when discussing AI in an interview setting, it is worth bearing in mind that for the technology to gain traction, it must produce long-term cost savings at minimal risk to the business.

The cost-saving premise in generative AI for a law firm is self-evident, at least in theory.

In an interview, therefore, you may find it helpful to focus on the risks inherent in the technology, because these may pose the biggest hurdle to successful implementation.

At the end of the day, law as a profession is defined by its risk aversion and risk mitigation. Law firms have traditionally been slow to adopt new technologies because clients rely on them to minimise, as far as possible, any risk they face. Clients will not abide law firms that prioritise cost savings through the use of AI at the expense of increased risk (nor will a law firm’s insurers).

Generative AI technologies must therefore produce content accurate enough that it remains cost-effective for fee-earners to review the output and correct any inaccuracies.