Technology, Ethics/Professional Responsibility
Aug. 8, 2025
AI safety for lawyers: It's not how the engine works, it's how you drive the car
Recent sanctions show lawyers don't need to master AI's technical architecture -- they need to follow simple operational safety rules, just like driving a car.
When a lawyer drives to court, nobody expects them to understand how fuel injection works. Nobody cares if they can explain the mechanics of anti-lock brakes or the physics of combustion engines. But everyone absolutely expects them to check their mirrors, wear a seatbelt, and drive defensively.
The same principle applies to artificial intelligence in legal practice.
The mechanics don't matter (much)
Many articles about AI in law focus on the fascinating technical details -- how large language models map words in thousand-dimensional spaces, how they use pattern recognition to predict text, and how they're essentially sophisticated autocomplete systems. Interesting? Yes. Relevant to daily practice? Rarely.
Consider this: multiple lawyers have been sanctioned for submitting AI-generated fake cases to courts. Last month, an attorney used ChatGPT to "enhance" an already-written brief but never read the final version, missing the fabricated cases ChatGPT had added. None of these sanctions occurred because the lawyers failed to understand language models or neural networks. They happened because lawyers violated basic operational safety rules.
Just as driving safety isn't about understanding engineering, AI safety in law isn't about understanding algorithms. It's about following three simple, non-negotiable protocols that prevent catastrophic mistakes.
1. Check your mirrors: Verify every citation
Driving rule: Check your mirrors before changing lanes. You need to know what's actually around you, not what you assume is there.
AI equivalent: Verify every legal citation AI provides. Every case, statute and quote must be independently confirmed.
AI doesn't lie maliciously -- it generates plausible-sounding content based on patterns. Just as checking your mirrors shows you what's actually on the road, verification shows you whether the authorities AI cites actually exist.
This isn't paranoia; it's professional responsibility. Would you change lanes just because your passenger said it looked clear? Then don't submit a brief without checking citations just because AI said they exist.
2. Wear your seatbelt: Write careful prompts
Driving rule: Always wear a seatbelt. It protects you when things go wrong, which they inevitably will.
AI equivalent: Write prompts that actively guard against bias and sycophancy.
AI systems are designed to be helpful and agreeable. Ask them "Is my argument strong?" and they'll often say yes, regardless of your argument's actual merit. Ask them to find supporting cases for a weak position, and they'll try their best to comply -- even if it means fabricating citations.
Just as seatbelts protect against collisions, careful prompts protect against AI's eagerness to please. Instead of asking "How can I win this motion?" ask "What are the three strongest counterarguments to this position?" Instead of "Find cases supporting my theory," try "What would opposing counsel argue?"
Good prompts assume AI will try to please you, so they ask it to challenge your work, not validate it.
3. Drive defensively: Treat AI like a first-year associate
Driving rule: Drive defensively. Expect mistakes from others and plan accordingly.
AI equivalent: Treat AI output like a first draft from a smart but inexperienced first-year associate or extern.
A new lawyer is smart, well-educated and eager to help. They might produce impressive work quickly. But they lack judgment, make confident assertions about things they don't fully understand, and may confuse legal-sounding language with legal reasoning.
You wouldn't file that new lawyer's first draft as-is. You'd revise, question and test their work. Do the same with AI. Use it to accelerate drafting or generate ideas, but apply the same editorial scrutiny you'd use with any junior colleague's work. Ask: Is this reasoning sound? Are these citations accurate? Would I stake my reputation on this?
The real danger isn't technical ignorance
The lawyers facing sanctions didn't fail because they couldn't explain how neural networks function. They failed because they treated AI like a trusted senior colleague, not an overeager assistant. These attorneys essentially handed AI the keys and never checked where it drove -- discovering too late that their "enhanced" briefs had veered into fabrication.
These aren't technical failures requiring deep expertise to solve. They're failures of professional discipline.
This is professional responsibility, not technology law
These aren't technology rules -- they're California Rules of Professional Conduct requirements that happen to involve technology. The State Bar of California's November 2023 AI guidance makes clear that existing professional responsibility obligations apply fully to AI use, including Rules 1.1 (competence), 1.3 (diligence), 1.6 (confidentiality), and 3.3 (candor to tribunals).
Rule 1.1 specifically requires lawyers to "keep abreast of the changes in the law and its practice, including the benefits and risks associated with relevant technology." When you submit a brief with fabricated citations, State Bar disciplinary authorities won't ask whether you understand neural networks. They'll ask whether you exercised the competence and diligence required under Rules 1.1 and 1.3.
California's official guidance is explicit: "A lawyer must critically review, validate and correct both the input and the output of generative AI," and "A lawyer's professional judgment cannot be delegated to generative AI." Technology doesn't change these fundamental obligations. The same rules apply to any tool lawyers use.
The State Bar emphasizes that lawyers must understand AI's limitations and avoid overreliance on these tools while maintaining their professional judgment. This isn't about becoming a tech expert. It's about applying the same professional standards you'd use with any research tool that could impact your clients' interests.
The Rules of Professional Conduct already provide the framework. The challenge isn't learning new rules -- it's following existing ones when using AI.
Simple rules for complex technology
The beauty of the driving metaphor is its universality. Whether you're driving a 1995 Honda Civic or a 2025 Tesla with full self-driving capability, the basic safety rules remain the same: check your surroundings, protect yourself against foreseeable problems and stay alert for the unexpected.
Similarly, whether you're using ChatGPT, Claude, or whatever AI system comes next, the operational safety rules remain constant:
1. Always verify citations (check your mirrors)
2. Write prompts that resist bias and sycophancy (wear your seatbelt)
3. Treat all AI output as a first draft requiring human judgment (drive defensively)
None of these require knowing how AI works. They just require knowing how to work safely with it.
Focus on what actually matters
When someone wants to explain AI safety through technical architecture, ask this: How many sanctioned lawyers would have been saved by understanding large language models or neural networks?
The answer: zero.
How many would have been saved by checking their citations?
Every single one.
AI safety in law isn't about the engine under the hood. It's about how we handle the wheel. Competent, ethical driving is what keeps us -- and our clients -- safe.
Disclaimer: The views expressed in this article are solely those of the author in their personal capacity and do not reflect the official position of the California Court of Appeal, Second District, or the Judicial Branch of California. This article is intended to contribute to scholarly dialogue and does not represent judicial policy or administrative guidance.