Technology,
Ethics/Professional Responsibility
Mar. 4, 2026
When AI does the lawyering and ethics take a back seat
Generative AI may speed up legal work under tight deadlines, but overreliance risks creating hallucinated case law, professional sanctions and ethical pitfalls that threaten both attorneys and clients.
Anita Taff-Rice
Founder
iCommLaw
Technology and telecommunications
1547 Palos Verdes Mall # 298
Walnut Creek, CA 94597-2228
Phone: (415) 699-7885
Email: anita@icommlaw.com
iCommLaw® is a Bay Area firm specializing in technology, telecommunications and cybersecurity matters.
"We've got a long way to go and a short time to get
there," a lyric from the theme song of the 1977 comedy Smokey and the Bandit, perfectly captures
how many attorneys feel every day. With the deadline for an important legal
brief fast approaching, the workload far exceeds the time available, and no one
on the litigation team has time to help. What to do?
Increasingly, attorneys are turning to Generative AI as a
substitute rather than a tool. After all, AI seems authoritative,
coherent and confident. But mostly, it's fast. So, the attorney asks AI to
provide a summary of the five best cases on a legal issue and AI delivers a
polished paragraph of cases with pinpoint cites and quotes in a compelling format
and under a tight deadline.
The attorney drops the AI-generated summary into the
brief and files it with the court without verifying that the cited authorities
exist and remain good law, that propositions match holdings, or that
quotations are accurate. Blind reliance on AI not only invites embarrassment and
sanctions but can also foster dependency or exacerbate mental health issues.
A group of researchers recently documented real examples
of people who began to experience mental health issues after intensive use of
AI systems. "Delusions by design? How everyday AIs might be fuelling [sic]
psychosis (and what can be done about it)," Hamilton Morrin et al. The
researchers found that AI's ability to mimic human conversation may cause the
user to form an attachment to an apparently sentient AI entity. The researchers
also identified a common pattern: AI is used for assistance with everyday tasks,
which builds trust and familiarity with the system. Because AI systems are
designed to maximize engagement with the user, they may create a "slippery
slope" in which the user becomes increasingly dependent on the AI system and
increasingly unmoored from "consensus reality."
It's easy to see how a harried attorney might rely too
much on an AI system that reinforces his or her theory of a case or that
quickly provides cases that open a previously unconsidered line of attack or
defense -- if the cases are real and still good law. The appeal of using AI is
that the platforms are designed to maximize the chance of giving an answer,
which is welcome if an attorney is up against an important deadline. But the
motivation to produce an answer, any answer, means that the AI platform
may fabricate one that seems plausible rather than admit it doesn't know
something.
This is not a hypothetical problem. Examples of attorneys
being sanctioned for AI hallucinations in court briefs are on the rise. Last
fall, a California appellate court noted that in the last two years, many
courts have confronted briefs populated with fraudulent legal citations
resulting from attorneys' reliance on AI. Noland v. Land of the Free, L.P. (2025)
114 Cal.App.5th 426, 443-444. "The appearance of hallucinated citations in
briefs generated from AI is no longer in its nascent stage. Regrettably, the
number and regularity with which courts have been faced with hallucinations in
court filings continues to rise." (Id.) The court lamented that "the
problem of AI hallucinations is getting worse, not better," noting that OpenAI's
newest models hallucinated "30-50% of the time, according to company tests."
(Noland, citing Murray, "Why AI 'Hallucinations' Are Worse Than Ever,"
Forbes.com (May 6, 2025).)
The Noland court stated that "hallucination" is a
particularly apt word to describe "the darker consequences of AI," in which
"there are many instances ... where hallucinations are circulated, believed,
and become 'fact' and 'law' in some minds." (Id.) The court noted that submitting legal
documents with fake cites "is entirely preventable by competent counsel who do
their jobs properly and competently." (Id.) In Noland, the
attorney filed an appellate brief containing numerous fake citations; when the
court discovered them, he claimed he was unaware that AI platforms could
fabricate legal authority and had no intention of misleading the court. Citing
the many articles and other court decisions discussing AI hallucinations, the
court sanctioned the attorney $10,000.
Becoming too dependent on Generative AI is risky in other
ways as well. What happens if a platform crash or a communications outage cuts
off an attorney's access just when a pleading is due too soon to draft any
other way? And copying and pasting AI-generated cites or passages into a
pleading invites professional embarrassment at the very least, along with
malpractice claims from affected clients or sanctions.
California's sanctions regime for improper filings
reflects a policy judgment that courts must be protected from irresponsible
paper practice. Code of Civil Procedure section 128.7 authorizes trial courts
to impose sanctions to check abuses in the filing of pleadings, petitions,
notices of motion and similar papers. Section 128.7 contemplates sanctions
against attorneys and law firms responsible for the violation and provides
that, absent exceptional circumstances, a law firm is jointly responsible for
violations committed by its lawyers and employees. That provision creates
special risk for firms where AI use is informal, untrained or unsupervised.
Attorneys need to be mindful that AI is a powerful tool,
not a substitute for diligent legal work. The fact that many attorneys feel so
pressed that they knowingly copy someone else's work (whether produced by AI or
other attorneys) signals problems within the profession that need to be
addressed.