

Lawyers are already citing fake, AI-generated cases and it's becoming a problem

Just in case you're wondering how society is dealing with AI.

Mihai Andrei
June 10, 2025 @ 5:51 pm


High Court Justice Victoria Sharp must have had a very strange day. While reviewing case citations presented by lawyers, she realized something shocking: many of them simply didn’t exist.

It seems the lawyers had turned to generative AI for research, and, as often happens, the AI hallucinated and fabricated case law. The High Court, which oversees high-profile civil cases in the UK, warned that citing non-existent cases could lead to contempt of court or even criminal charges. But that’s unlikely to stop AI’s growing presence in legal practice.

Phantom cases

It’s easy to point fingers at students using AI to cheat on homework — but this trend has already seeped into professional legal circles. In a UK tax tribunal in 2023, an appellant provided nine bogus cases as “precedents”. When it was revealed that the cases weren’t real, she admitted it was “possible” that she used ChatGPT. In another 2023 case in New York, a court descended into chaos when a lawyer was challenged to produce the fictitious cases he cited.

Fast-forward to a recent £90 million ($120 million) lawsuit involving Qatar National Bank, where 18 of the 45 case-law citations submitted turned out to be fake, and several others featured fabricated quotes. The claimant admitted to using publicly available AI tools for legal drafting.

In yet another case, a legal center challenged a London borough over its failure to provide temporary housing. The lawyer cited phantom cases five times and couldn’t explain why no one could find them. While he didn’t admit to using AI, he claimed they may have appeared via Google or Safari searches, both of which now feature AI-generated content.

Needless to say, the judges were not amused.

“There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused,” Judge Victoria Sharp said in a written ruling.

Sharp acknowledged that AI could be genuinely useful in a court of law, but emphasized the pressing need for tight oversight.

“Artificial intelligence is a powerful technology. It can be a useful tool in litigation, both civil and criminal,” the ruling reads. “This comes with an important proviso, however. Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place, therefore, with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained.”

“In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities … and by those with the responsibility for regulating the provision of legal services.”

What happens now?

Sharp issued a “regulatory ruling” regarding the use of AI. While it is not legislation (i.e., a new law passed by Parliament), a regulatory ruling carries authoritative weight within the legal profession and sets out a framework for handling such situations. If lawyers continue to cite hallucinated cases, consequences could include public reprimands, cost penalties, wasted costs orders, contempt proceedings, and even police referrals.

“Where those duties are not complied with, the court’s powers include public admonition of the lawyer, the imposition of a costs order, the imposition of a wasted costs order, striking out a case, referral to a regulator, the initiation of contempt proceedings, and referral to the police,” the ruling states.

But like so many tech-related issues, this is yet another case of technology moving faster than the rules. AI tools like ChatGPT are already being used to draft legal arguments, but there’s no global professional standard for how to use them safely and responsibly.

AI is evolving fast, but the rules, education, and oversight needed to use it responsibly are lagging behind. In this case, it was a blatant misuse, but subtler hallucinations may already be going undetected. If accuracy and factuality can’t even be enforced in the court of law, this doesn’t bode well for the rest of society.
