
Technology,
Ethics/Professional Responsibility

Feb. 10, 2026

California AI rules for lawyers and arbitrators pass senate, head to assembly

California's Senate passed SB 574, a bill establishing AI guardrails for lawyers and arbitrators that codifies confidentiality duties, nondelegable citation verification and strict limits on AI's role in decision-making, sending it to the Assembly for consideration.

Mark S. Adams

Partner
Blank Rome

Business Litigation group and Hospitality practice


Sharon R. Klein

Phone: (949) 812-6010

Email: sharon.klein@blankrome.com

Sharon R. Klein is a California-based member of Blank Rome's Privacy, Security & Data Protection team. She can be reached at sharon.klein@blankrome.com.


Alex Nisenbaum

Email: alex.nisenbaum@blankrome.com

Alex C. Nisenbaum is a California-based member of Blank Rome's Privacy, Security & Data Protection team.


Joseph J. Mellema

Partner
Blank Rome

Business Litigation group and Hospitality practice



Generative artificial intelligence (AI) is now routine in law practice: Lawyers use it to draft correspondence, summarize discovery, outline deposition testimony and accelerate research. Public AI tools can expose confidential information if lawyers paste client materials into prompts, and large language models can "hallucinate," producing plausible-sounding facts, quotations or citations that do not exist. AI has enticed otherwise prudent practitioners to submit materials to courts without vetting the veracity of that information, resulting in sanctions.

California's Legislature is moving to address those risks with statewide rules. Senate Bill 574, authored by Senator Tom Umberg, passed the Senate 39-0 on Jan. 29 and has been sent to the Assembly. When asked about the bill's aim, Umberg said, "SB 574 makes clear that while technology may support legal work, responsibility and judgment remain firmly with the professional. This bill helps maintain public confidence in the fairness and credibility of California's justice system." Notably, materials from the California Senate Judiciary Committee reported no formal opposition and list privacy advocates among supporters, underscoring how mainstream these guardrails have become for practitioners and neutrals alike. Many courts have issued decisions and imposed fines for nonexistent or inaccurate citations (e.g., Noland v. Land of the Free, 114 Cal. App. 5th (2025)). If the bill passes the Assembly and is signed into law, California will be one of the first states to enact a statute regulating lawyers' and arbitrators' use of generative AI. In late 2025, California's court system became one of the first in the United States to adopt rules providing a similar framework to govern generative AI's use by the state's judicial branch.

Consistent with the "Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law" issued by the State Bar of California Standing Committee on Professional Responsibility and Conduct in November 2023, Senate Bill 574 does not ban generative AI; it codifies accountability. It does three things. First, it adds an explicit attorney duty aimed at protecting confidentiality and reducing "AI hallucinations." Second, it tightens California's sanctions statute to make citation verification nondelegable no matter who (or what) drafted the filing. Third, it draws a bright line for arbitrators: Generative AI cannot be used as a decisional surrogate, and neutrals must avoid AI use that could influence procedural or substantive outcomes.

Lawyers: Confidentiality

Senate Bill 574 would add Business and Professions Code Section 6068.1, framing AI governance as an "attorney duty." It requires lawyers to ensure that "confidential" or "nonpublic" information is not entered into a "public" generative AI system.

The bill also supplies concrete examples of "personal identifying information," sweeping well beyond obvious identifiers. The list includes dates of birth, Social Security numbers, driver's license numbers, and addresses and phone numbers for parties, witnesses and victims, as well as sensitive categories like medical, psychiatric and financial information, along with anything sealed by court order or designated confidential by rule or statute. The takeaway is that "I only used facts from the file" is not a safe harbor, because case files routinely include protected data.

Senate Bill 574 further requires attorneys to take "reasonable steps" to verify the accuracy of AI-generated material (including work prepared by others on the lawyer's behalf), correct erroneous or hallucinated output, and remove biased, offensive or harmful content. The proposed duty requires attorneys to ensure their use of generative AI does not unlawfully discriminate or disparately impact individuals or communities based on a broad set of protected classifications (and other classifications protected by federal or state law).

Finally, the statute includes a disclosure "nudge": The attorney must consider whether to disclose AI use if it was used to create content provided to the public. This could include disclosure to a client if a lawyer intends to use AI in the representation.

Court filings: Cite-checking is nondelegable

Senate Bill 574 would amend Code of Civil Procedure Section 128.7--California's Rule 11 analogue--to add an explicit prohibition, namely, that court filings cannot contain citations that the submitting attorney has not "personally read and verified," including citations supplied by generative AI.

This is a targeted response to a now-familiar scenario: AI-assisted briefs that cite cases that do not exist, or that misstate holdings with high confidence yet are utterly wrong. The Legislature's move is not an "AI ban" on legal drafting; rather, it is a clear, principle-based compliance rule tied to a sanctions framework. If you sign it, you own it--down to the citations.

Arbitrators: No "AI arbitrator," and avoid AI influence on decisions

The arbitration provisions are the most categorical. Senate Bill 574 would add Code of Civil Procedure Section 1282.1, which would prohibit arbitrators from delegating any part of the decision-making process to generative AI, and would direct arbitrators to avoid delegating tasks to AI if the use could influence procedural or substantive decisions.

In addition, the bill instructs arbitrators not to rely on AI-generated information outside the record without disclosure and, "as far as practical," an opportunity for the parties to comment. It also provides that if an AI tool cannot cite sources that can be independently verified, the arbitrator may not assume those sources exist or are accurately characterized. The arbitrator remains responsible for "all aspects" of the award.

This is a direct policy statement: efficiency cannot come at the expense of due process or the parties' expectation that a neutral human is doing the adjudicating. Legislative analysis frames Senate Bill 574 as a logical extension of the judiciary's own generative-AI guardrails for adjudicative tasks (California Standards of Judicial Administration, Standard 10.80).

Just as importantly, the bill reflects a broader shift: Professional responsibility rules that already exist (confidentiality, competence, supervision and candor) are being translated into AI-specific statutory requirements with clearer enforcement hooks. This trend to manage lawyers' use of AI will continue in California and other states and be modified as the technology advances ahead of the law.

Practical points

• Law firms, law departments and arbitrators must have AI policies reflecting their ethical and statutory responsibilities.

• Lawyers and arbitrators must undergo training on the responsible use of AI.

• Engagement letters should be revised to ensure clients are fully informed regarding the lawyers' use of AI and may also include provisions outlining the clients' responsibility to verify outcomes generated by AI.

• Monitor developments in AI-related guidance and best practices as they evolve within all areas of the practice of law.


