
Sep. 17, 2025

State Court of Appeal issues 1st opinion sanctioning lawyer for AI 'hallucinations'

More than 200 U.S. legal decisions to date have involved documented AI hallucinations, according to a database maintained by a Stanford Law School researcher. Last year Chief Justice John G. Roberts warned that "a shortcoming known as 'hallucination'" in AI tools can lead to citations to nonexistent cases.

Presiding Justice Lee Smalley Edmon

A lawyer's failure to check the delusional ChatGPT output he folded into his pleadings has cost him $10,000 in sanctions and made him the target of a state appellate panel's withering opinion of first impression regarding hallucinations in legal drafting.

Plus, he lost the case, he's being reported to the State Bar, he must show the opinion to his client and he must certify to the court that he has done so.

"We conclude by noting that 'hallucination' is a particularly apt word to describe the darker consequences of AI," wrote Presiding Justice Lee Smalley Edmon of the 2nd District Court of Appeal's Division 3.

The attorney who ran afoul of the panel, Amir Mostafavi of Mostafavi Law Group APC in Los Angeles, on Monday declared himself "devastated" by the opinion. He explained that after he drafted his appeal for a client in an employment case, he ran it through ChatGPT as a final step. "I asked it to enhance the writing, the wording, to make it look more professionally written," he said. "I didn't know such a thing as hallucinations existed.

"When it came out, I looked at the format, not the content, and then I put it in PDF and filed it."

A message seeking comment from ChatGPT's maker, OpenAI Inc., received no response as of Tuesday.

Aside from the hallucinations, Edmon and colleagues noted, the appeal in question concerned an affirmance of summary judgment in an otherwise unremarkable case that would ordinarily never have led to a published opinion.

Wrote Edmon, "The AI tools [Mostafavi used] created fake legal authority--sometimes referred to as AI 'hallucinations'--that were undetected by plaintiff's counsel because he did not read the cases the AI tools cited."

And although the misuse of AI in legal drafting has gotten a lot of attention, Edmon added, no California court has addressed this issue. 

"We therefore publish this opinion as a warning." Noland v. Nazar et al., B331918 (Cal. App 2nd, filed June 25, 2025).

Lawyers have been warned before. "AI hallucinations are here to stay," emailed law professor Daniel E. Ho of Stanford Law School, the co-author of several studies on the reliability of leading AI legal research tools. 

More than 200 U.S. legal decisions to date have involved documented AI hallucinations, according to a database maintained by a researcher Ho endorses.

Added Ho: "Recent advances (such as retrieval augmented generation and agentic systems) can reduce hallucinations, but our research shows that widely marketed tools must always be checked by lawyers, as there is no known solution. That's fundamentally what the appeals court got right."

The panel wrote that nearly all the quotations in Mostafavi's briefing were fabricated, even though most of the cases he cited do exist. Even so, "many of the cases plaintiff cites do not support the propositions for which they are cited or discuss other matters entirely, and a few of the cases do not exist at all."

One 2011 case, Schimmel v. Levin, 195 Cal. App. 4th 81, appeared in Mostafavi's briefing to discuss abuse of the summary judgment procedure and the legislative purpose behind Section 437c(f)(2) of the Code of Civil Procedure, the panel said, adding, "In fact, Schimmel does not contain a single reference to either summary judgment or section 437c."

Describing the consequences of such fakery, the panel wrote, "AI hallucinates facts and law to an attorney, who takes them as real and repeats them to a court. This court detected (and rejected) these particular hallucinations. But there are many instances--hopefully not in a judicial setting--where hallucinations are circulated, believed, and become 'fact' and 'law' in some minds. We all must guard against those instances."

When it scrutinized Mostafavi's briefing, the appellate court spotted the fabrications. Then it sent him an order to show cause why he should not be sanctioned and scheduled a hearing on the issue.

Mostafavi's opposing counsel, Michael Yadegari, acknowledged that he was fooled by the bogus quotations from real cases that infected Mostafavi's appeal. Then the appellate panel sent him a copy of the order to show cause it had issued to Mostafavi.

"I was too busy writing my own appeal briefs, so I never thought to check his cites," Yadegari said. "When I got the order, I was shocked. I was working my ass off, and he was using a robot to write shit."

Because Yadegari failed to detect the fabricated citations, the panel declined to order sanctions payable to him.

Although it's a case of first impression in California state court, it's hardly the first time hallucinations have marred litigation locally. In a federal civil matter currently pending in the Central District, fake cases and fabricated legal language were found to have appeared in briefs filed by Hagens Berman Sobol Shapiro LLP.

That firm last week said the briefs were drafted by co-counsel outside the firm. And Hagens also faulted opposing counsel at Skadden, Arps, Slate, Meagher & Flom LLP for delaying notification that they had discovered the fake material. N.Z. et al. v. Fenix International Ltd. et al., 8:24-cv-01655 (C.D. Cal., filed July 29, 2024).

In August, a federal judge in Arizona imposed severe sanctions on a lawyer who filed an opening brief replete with nonexistent, inapposite or misquoted cases on behalf of a Social Security claimant. The brief's deficiencies were "consistent with artificial intelligence generated hallucinations," the judge wrote. Mavy v. Commissioner of Social Security Administration, 2:25-cv-00689 (D. Ariz., filed Feb. 28, 2025).

The judge removed the lawyer from the case, revoked her pro hac vice status, struck her opening brief, reported her to her bar association in Washington state, required her to write letters to three judges to whom she attributed fictitious cases and, going forward, ordered her to send a copy of his order to every judge presiding over any case in which she is counsel of record.

Last year Chief Justice John G. Roberts warned in his annual report that "a shortcoming known as 'hallucination'" in AI tools can lead to citations to nonexistent cases.

"Always a bad idea," the chief remarked dryly.


John Roemer

Daily Journal Staff Writer
johnroemer4@gmail.com
