Ethics/Professional Responsibility
Jun. 6, 2025
Counselor at law: Preserving the human in legal practice
The California State Bar's AI scandal exposes a profound irony: As machines master the rules-based tasks measured by traditional bar exams, we're forced to confront what truly constitutes legal competence - the distinctly human judgment, ethical wisdom, and counseling abilities that no algorithm can replicate.

When AI can write bar exam questions, we have
to ask: What exactly are we testing? The California State Bar's recent
admission forces us to confront an uncomfortable truth about legal licensing: The
skills we measure are precisely those that machines now replicate best.
The bar exam's mixed legacy
The bar exam - created a century ago - continues to emphasize
memorization of black-letter law across numerous subjects, rapid application of
rules to hypothetical scenarios, and the ability to perform under extreme
pressure. These skills bear only passing resemblance to actual legal practice.
Research has failed to demonstrate strong correlations between
bar exam scores and subsequent professional competence. A 1980 California study
found that 40% of bar-passing attorneys lacked competence in practical tasks
like client interviews, brief writing, jury arguments and client letters. More recent
studies confirm the bar exam's weak predictive value for legal effectiveness.
Even research on GPT-4's bar exam performance noted that "the tasks involved on
the bar exam, particularly multiple-choice questions, do not reflect the tasks
of practicing lawyers" - and thus success on the exam doesn't necessarily
reflect mastery of the skills practicing lawyers need.
This critique becomes urgent in the AI era, when machines can
easily replicate the exam's core tasks - identifying relevant legal rules and
applying them to facts. When AI can pass the bar exam, what uniquely human
skills are we failing to measure?
The machines that cannot understand justice
The current bar exam treats legal competence like navigating
with an automated GPS: follow preset rules, make indicated turns, reach a
predetermined destination. But legal practice demands the wisdom of a local
guide who knows not just the official map but the unmarked scenic routes, the
neighborhoods to avoid after dark, and which shortcuts are actually
worth taking. AI masters the map. It cannot understand the territory in
human terms.
The Bar's AI controversy reveals a fundamental tension: Our bar
exam may favor skills that machines can replicate over those they cannot.
Legal understanding transcends pattern recognition. It requires
what Aristotle called phronesis - practical wisdom that considers not just which
rules apply but what outcome best serves justice. While bar exams test the
former, they barely touch the latter. And now, ironically, the very entity that
constructs these tests has demonstrated that rule application can be outsourced
to a non-lawyer AI system.
What we should actually test for: "Counselor at Law"
The Bar's AI mishap provides a moment to reflect on what it
truly means to be a lawyer. The term "counselor" - historically rooted in
medieval England's Courts of Chancery - captures the essence of lawyering that
transcends mere knowledge of rules. It reminds us that attorneys are, first and
foremost, trusted advisors providing guidance through complex human challenges.
Years ago, as I took the California Bar's performance exam, this
distinction became viscerally clear. The sweat beading on my forehead came not
just from anxiety, but from the tangible parallels to actual legal work. The
rustle of papers as I sorted through the case file, the methodical highlighting
of key facts, the mental triage of urgent issues - these sensations mirrored my
supervised law school work in a way that sterile
multiple-choice questions never could. Unlike the exam's theoretical sections,
this performance test demanded the same counselor's judgment I'd developed on
real cases.
A skilled counselor functions more like a trusted family doctor
who's known you for decades - technically trained yet attuned to your history,
values, fears and unstated hopes. The California Bar's experiment inadvertently
exposes the gulf between these two modes of practice. Machines can replicate the
appearance of expertise, but they cannot embody the wisdom, empathy, and
judgment that define the counselor's role.
Encouragingly, the NextGen Bar Exam, launching in 2026,
reflects this understanding by shifting assessment toward the counselor
role. It uses integrated scenarios that require candidates to demonstrate
client counseling, evidence evaluation, and the kind of complex problem-solving
that mirrors actual practice. This recognition of what truly matters must accelerate.
As we reconsider attorney licensing, we should prioritize
qualities that embody this counselor role. The American Bar Association's
Standard 303 broadly defines professional identity as focusing on "what it
means to be a lawyer and the special obligations lawyers have to their clients
and society." We should interpret this to prioritize the counseling and
advisory capabilities that technology cannot replicate.
Toward a more meaningful licensing model
What might a licensing system that truly evaluates these human
counselor capacities look like? Beyond enhanced performance tests, we could
incorporate standardized client interactions where candidates demonstrate
interviewing and counseling skills with trained actors playing clients - similar to medical licensing's OSCE (Objective Structured
Clinical Examination). We might develop portfolio assessments where candidates
demonstrate competence through actual legal work performed during law school
clinics or supervised practice. Some jurisdictions could adapt New Hampshire's
Daniel Webster Scholar program, which replaces the traditional bar exam with a
comprehensive two-year assessment during law school that includes client
counseling, negotiation, and trial practice evaluations.
Richard Susskind observes that the fundamental question isn't
whether machines can exhibit empathy or judgment, but rather "for what problems
are empathy, judgment, or creativity the solution?" This crystallizes our
challenge: testing for the distinct human counseling skills that define
exceptional lawyers.
The lesson of the Bar's AI scandal extends beyond transparency
concerns. It invites us to answer a more fundamental question: Does our
licensing system truly measure the counseling abilities that future clients will
need and future courtrooms will demand? Just as the black box can never replace
the open book of judicial wisdom, so too in licensing we must ensure we're
measuring what matters, not what machines can mimic. While we
debate how to improve a licensing system that AI can now penetrate, we are
forced to confront a profound truth: the very skills least measurable by our
current methods are those most indispensable to justice.
Disclaimer: The views expressed in this article are solely
those of the author in their personal capacity and do not reflect the official
position of the California Court of Appeal, Second District, or the Judicial
Branch of California. This article is intended solely to contribute to
scholarly dialogue and does not represent judicial policy, administrative
guidance, or any indication of how the author would approach these issues in
any legal proceeding.