
AI is making lawyers more productive and justice more accessible, Northeastern legal expert says 

In “AI and the Law: Assistant or Assassin,” associate professor Ursula Smartt writes that “some of the benefits of gen-AI are starting to emerge” for the legal profession.

Generative AI has the ability to ‘take the dross out’ of being a lawyer, says associate professor Ursula Smartt (Photo by Alberto Pezzali/NurPhoto via Getty Images)

LONDON — Generative artificial intelligence is creating new legal work — from courtroom copyright disputes to media contracts — while also reshaping how law is practiced.

But it is also removing the tedious and low-value aspects of legal work, argues Ursula Smartt, associate professor of law at Northeastern University in London.

In her latest peer-reviewed paper, “AI and the Law: Assistant or Assassin,” published in European Intellectual Property Review, Smartt writes that “some of the benefits of gen-AI are starting to emerge” for the legal profession.

The emerging technology is acting as a “little helper,” she says, with lawyers using it for day-to-day tasks such as scheduling meetings, summarizing emails and legal documents, or providing first drafts of oral arguments. 

“Rather than replacing humans, AI is being used to augment them — performing as an assistant rather than an assassin,” she writes. “More recently, law firms have started to realise that they can automate some of the tedious lawyering work and admin that does not add value but does consume precious time.”

As well as fueling growth in the sector, AI has the ability to “potentially increase access to justice and make it cheaper to solve certain types of legal problems, particularly in the area of litigation and court services,” Smartt concludes.

That’s not to say the legal industry won’t see casualties as a result of generative AI, she warns. Smartt tells Northeastern Global News that those at risk are the professionals who refuse to engage with AI in their work, for instance by learning how to write time-saving prompts.

Ursula Smartt’s new paper argues that a legal mind is still needed in order to check what generative AI produces. Photo by Suzanne Plunkett for Northeastern University

“Where the assassination will happen is for the type of lawyer who is not tech savvy, who hasn’t gone with the times, who doesn’t engage with AI,” she explains.

“AI is here to stay. I say to my students when they apply for jobs or promotions, show that you can use AI, show that you can use it as an assistant.

“Because the assassination of lawyers will be at the top end — those potentially with the big salaries but who cannot use technology and who are unwilling to engage with AI. Because it is moving on and it is moving on really fast.”

Another area where generative AI has posed a challenge is client confidentiality, says Smartt, author of the leading textbook “Media and Entertainment Law,” published by Routledge. It is a problem the U.K.’s legal sector, particularly the top five British firms, known as the “magic circle,” has been attempting to address recently.

“In the last two months,” Smartt says, “staff at all the magic circle law firms have been told they must not use any open AI software such as ChatGPT, Google’s Gemini or Microsoft Copilot at work and for work purposes.

“Lawyers had been found to be feeding client data into them to ease their workload. There is no doubt that AI facilitates quicker, easier formulation for drafting documents or drafting court papers if you’re dealing in litigation. 

“But increasingly, and in America as well, the lawyers were putting in client data. Now that clearly breaches data protection laws in this country. The problem is, with most of these AI software programs, their developers are based in the United States, and the U.S. privacy and data protection is not as strong as here, with the European Union and U.K.’s General Data Protection Regulation rules.”

Law firms have addressed this by creating in-house generative AI for their staff to use, Smartt explains, or by buying off-the-shelf products that are specifically designed for the industry.

Magic circle firms A&O Shearman and Clifford Chance, along with so-called “silver circle” firm Mishcon de Reya, have taken the route of having “developed or developing their own” AI software, she says.

Systems already used by law firms for case and practice management — like Lexis, Kira and Harvey — are having AI built into them to make them more efficient, adds Smartt.

In her seven-page paper, Smartt argues that, while AI is being adopted in courtrooms across the world — in India, it is used to translate documents written in regional languages into English, while in China, it is used for repetitive tasks — it does not replace the need for a legal mind.

In response to rising concerns, some U.S. courts now require attorneys to certify that any generative AI use in legal filings has been reviewed for accuracy by a human.

Smartt says this is partly to guard against generative AI’s tendency toward “making things up and ‘hallucinating’.”

“These AI models are an assistance for lawyers when it comes to contracts, when it comes to litigation and writing submissions to court,” Smartt tells NGN. “They are there and I’ve used them — but you’ve still got to be a lawyer. 

“You need to have the legal knowledge to check the answer, to check what ChatGPT or whatever has come up with. You still need to be a lawyer in order to certify that what the AI beast has come up with is right.”