Full Disclosure:

Could AI change the law firm partnership track?

By Jaysen Sutton
📩 Sign up here to receive a new edition of 'Full Disclosure' directly to your inbox every week.

Hi Reader 👋🏽,


It’s your first day in a new job working for a tech company. You have some basic questions already, but you’re a little embarrassed to ask: How much leave did they say you get? How does the bonus system work? Where does the head of department sit?

You remember that your supervisor told you to head to the company’s new portal, GENERATE, with any questions. You open the portal, type your questions into the box and, by some miracle, the system answers them. You know exactly how much leave you get, what bonus to expect and how to avoid getting lost on your first day.

This is the exciting domain of generative AI, which involves training a computer programme on huge amounts of data until it can generate new content of its own. Generative AI is not exactly new; what has changed is the sophistication of the models and the mass adoption of OpenAI’s ChatGPT (it took just two months to reach 100 million monthly users).
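
To make that concrete: a portal like GENERATE is, at its core, a thin wrapper around a large language model. Below is a minimal Python sketch of how such a backend might work, assuming the OpenAI API; the model choice, portal name and HR answers are purely illustrative, and a real system would pull that context from company documents rather than hard-coding it.

```python
# Minimal sketch of an internal Q&A portal backed by a generative model.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the company context below is invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COMPANY_CONTEXT = """\
Annual leave: 25 days plus bank holidays.
Bonus: discretionary, paid each March, based on firm and individual performance.
Head of department: third floor, by the east windows.
"""

def ask_generate(question: str) -> str:
    """Answer an employee question using only the supplied company context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "You are GENERATE, an internal company assistant. "
                           "Answer using only this context:\n" + COMPANY_CONTEXT,
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_generate("How much annual leave do I get?"))
```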

What is more exciting is the potential to train the system with company data, promising to increase employee productivity and help companies make better decisions. This is why the world’s biggest companies are racing to build their own AI platforms. Last week, according to a memo seen by the Financial Times, BlackRock, the largest manager of other people’s money, told staff it is rolling out its own generative AI for employees and clients.
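
In practice, “training the system with company data” rarely means retraining the model itself. A common pattern is retrieval: the company’s documents are converted into vectors, the passages most relevant to a query are looked up, and those passages are handed to the model alongside the question. A rough sketch, again assuming the OpenAI API, with invented documents:

```python
# Sketch of retrieval over company documents, a common way to "train"
# a generative AI tool on internal data without retraining the model.
# Assumes the OpenAI Python SDK and numpy; documents are invented.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Leave policy: employees receive 25 days of annual leave.",
    "Bonus scheme: discretionary bonus paid annually in March.",
    "Office map: department heads sit on the third floor.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Turn texts into vectors so they can be compared by meaning."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vectors = embed(documents)

def most_relevant(question: str) -> str:
    """Return the stored document closest in meaning to the question."""
    q = embed([question])[0]
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )  # cosine similarity against every document
    return documents[int(np.argmax(scores))]

question = "How does the bonus system work?"
context = most_relevant(question)
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Context: {context}\n\nQuestion: {question}"}],
).choices[0].message.content
print(answer)
```

Retrieval is usually preferred to retraining because documents can be added or corrected instantly, and the model can be instructed to answer only from what it was shown.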

How does it impact law firms?

Allen & Overy was incredibly early in its use of GPT-4, investing millions to establish itself as a leader in the use of generative AI. According to The Lawyer, the firm was one of a handful given early access to the beta version of GPT-4, before it was announced in February 2023 that Allen & Overy had partnered with legal AI start-up Harvey AI.

Other law firms have since taken their own measures: Macfarlanes announced its partnership with Harvey AI in September 2023, Simmons & Simmons has launched its internal generative AI tool, Percy, and Travers Smith has developed its own system, Analyse.

What does it mean for your law firm interviews?

Right now, the attention is on the promise of AI in the legal profession. The starting point is that generative AI tools can be fed relevant information (think contracts, statutes and agreements), which they can then analyse for lawyers based on the prompts they are given.
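
At its simplest, that “feeding” just means placing the document’s text inside the prompt. Here is a hedged sketch of how a tool might ask a model about a contract’s termination provisions; the clause and prompt wording are invented, and real products such as Harvey layer retrieval, citations and human review on top.

```python
# Sketch of document-grounded legal analysis: the contract text goes into
# the prompt and the model answers questions about it. The clause below is
# invented; real legal AI tools add safeguards, citations and human review.
from openai import OpenAI

client = OpenAI()

contract = """\
Clause 12.1: Either party may terminate this Agreement on 90 days'
written notice. Clause 12.2: The Supplier may terminate immediately
if the Client fails to pay any undisputed invoice within 30 days.
"""

prompt = (
    "You are assisting a solicitor. Using only the contract below, "
    "summarise the termination rights of each party and flag anything "
    "a client should be warned about.\n\n" + contract
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```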

Long-term, this could mean AI takes on much of the administrative workload from junior lawyers. The more AI can read, update and analyse documents, conduct preliminary research and carry out routine tasks, the more trainee solicitors can concentrate on the elements that AI can’t do. Training generative AI on an accurate understanding of English law could even see it produce early drafts of contracts and client emails.

With this in mind, it was interesting to read this piece from Bloomberg last week:


Consulting giants and law firms are looking to artificial intelligence to speed up the time it takes junior staffers to make it to the prestigious partner level as the technology eliminates vast swaths of the repetitive, time-consuming tasks that typically filled up their first few years on the job.

At KPMG, for instance, freshly-minted graduates are now doing tax work that was previously reserved for staff with at least three years of experience. Over at PwC, junior staffers are spending more time pitching clients rather than the hours they used to spend prepping meeting documents. And at Macfarlanes LLP, junior lawyers are interpreting complex contracts that their more-experienced peers used to have to handle.

With the rate of AI adoption, it’s unsurprising that governments hold differing views on how to regulate it.

Over the weekend, the EU announced its AI Act, becoming the first jurisdiction to agree a comprehensive set of rules to regulate AI.

The rules are based on the perceived risk of AI systems: behaviourally manipulative tools are banned outright, 'high-risk' systems face strict obligations, and general-purpose tools like ChatGPT must comply with transparency requirements, such as disclosing that their content was created by AI.

Fines for failing to comply range from €7.5 million or 1.5% of global turnover to €35 million or 7% of global turnover, whichever is higher.

Have any thoughts? I'd love to hear your perspective below!

❓Contact [email protected] with any queries.