This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission. Please click "Reprint" to order presentation-ready copies to distribute to clients or use in commercial marketing materials or for permission to post on a website.

Technology

Mar. 9, 2026

Legal AI in Japan: ambition, caution and the permission paradox - 1st in a 3-part series


As part of the Daily Journal series on how legal professionals are using AI, this is the first of three articles based on interviews personally conducted in Tokyo by Howard Miller, past President of the State Bar of California and Daily Journal Contributing Editor, and Marc Miller, Ralph W. Bilby Professor of Law and Dean Emeritus, University of Arizona James E. Rogers College of Law.

The interviews were with Atsushi Okada, Mori Hamada & Matsumoto • Professor Ichiro Kobayashi • Tomohiro Okumura, LegalOn Technologies • Takahiro Homma, Renesas Electronics • and Toshimi Itakura, Sojitz Corporation, all of whom are further identified in the articles. All interviews were recorded, transcribed, and sent for review by participants prior to publication.

Forgiveness or Permission?

The regulatory fault line shaping Japan's legal AI--and what it reveals about a profession caught between ambition and caution

In our interviews with lawyers in Tokyo, we often asked about a phrase familiar in Silicon Valley, "better to ask forgiveness than permission." The response, each time, was immediate recognition--and something close to discomfort.

Japan's legal profession is navigating the AI revolution from a fundamentally different cultural and regulatory starting point than its California counterparts. Across five interviews conducted in Tokyo in February, a consistent picture emerged: Japanese legal professionals understand what AI can do, are using it in growing numbers, and are genuinely excited by its potential--and yet are constrained by a web of regulatory caution, institutional risk aversion and structural rules that have no direct parallel in the United States.

"People now understand we cannot live without AI technology," said Atsushi Okada, a partner at Mori Hamada & Matsumoto, one of Japan's largest law firms, and a member of a government committee charged with shaping Japan's national AI policy. "The balance is important."

That balance, however, is struck very differently here than in Los Angeles or San Francisco.

The Permission Infrastructure

At Sojitz Corporation, one of Japan's major general trading houses, General Manager of Legal Toshimi Itakura oversees a team of roughly 70 legal professionals covering seven business divisions with operations spanning six continents. Her department uses an internal AI chat platform, various legal tech tools from approved vendors and AI chatbots for compliance guidance. Every tool on that list has gone through a vetting process managed by the IT department, evaluated for cybersecurity, data reliability and regulatory compliance. No member of the department can simply decide to use a new AI tool on their own.

"If we wanted to use it, you have to go through the approval. What we call shadow generative AI--that's not approved."  -- Toshimi Itakura, Sojitz Corporation

This pattern--extensive internal approval before any AI tool is authorized--was consistent across in-house settings. At Renesas Electronics, where General Counsel Takahiro Homma oversees 120 legal professionals globally, the primary AI tool is Microsoft Copilot, chosen in part because it was already woven into the company's enterprise Microsoft infrastructure and met security standards without requiring a separate approval architecture. "We spent several months, or even more than one year, to explore and choose the best AI contribution," Homma said. "We found this Copilot button everywhere in Teams, in Office--why don't we explore the usage of this function?"

The caution is not irrational. Japanese corporations are acutely sensitive to data privacy and cybersecurity risks, and the absence of comprehensive AI-specific legislation--Japan has issued guidance but has not yet enacted anything comparable to the EU AI Act--means companies are effectively setting their own standards. For large global companies like Renesas, compliance with EU data protection rules serves as a proxy for global compliance. "If you comply with EU, then you're safe almost everywhere, at this moment," Homma said.

The UPL Problem

But the more structural constraint--the one with no direct analogue in California practice--is Japan's unauthorized practice of law framework. Professor Ichiro Kobayashi, a law professor and head of the Global Legal Innovation Education and Research Center at his university, has written extensively on the peculiarities of Japan's UPL rules, and his analysis helps explain why the legal technology market in Japan looks so different from the United States.

In the U.S., the battle over who can provide legal services was largely fought--and largely settled--with the rise of LegalZoom. Bar associations resisted, courts weighed in, and eventually the market accepted a model in which a legal tech platform could provide document services, recommend lawyers, and operate as long as it was careful not to cross the line into giving specific legal advice.

In Japan, that settlement has not happened--and the rules make it harder to imagine. Under Japan's UPL framework, a legal tech company cannot simply provide legal services to consumers. But it also cannot refer consumers to outside lawyers and receive any benefit for doing so--even indirectly, even without compensation. "Legal tech company is not able to introduce outside lawyers to the consumer," Kobayashi said. "That is another serious problem, and in my understanding, the second problem is preventing the development of legal tech companies."

"The uniqueness of Japan's situation is that government agencies' policy is very strong, and legal tech companies are very hesitant to do anything against the guidance by the administration."  -- Professor Ichiro Kobayashi

The consequences are concrete. The Japan Federation of Bar Associations has taken disciplinary action against licensed lawyers who serve as executives at legal tech companies--for developing tools that the bar views as constituting the practice of law. The sanction is not disbarment but a public disciplinary notice, published in the official gazette. In Japan's small, reputationally sensitive professional world, that is a meaningful deterrent.

"The CEO and executives of the company are always afraid of being punished," Kobayashi said.

There is a further asymmetry that Kobayashi finds both striking and telling: Japanese regulators apply this scrutiny to Japanese companies, but not to U.S. AI providers. ChatGPT, Claude and Gemini operate in Japan without being subjected to UPL analysis, even though they can and do assist consumers with legal questions. "Japanese regulator does not want to confront the United States," he observed.

Consumers Fill the Gap--Imperfectly

The practical result is that Japanese consumers who need basic legal help have turned to general-purpose AI tools--not because those tools are ideal, but because the regulated alternatives are legally constrained from serving them. "People have no hesitation to use generative AI at all," Kobayashi said. "At the litigation stage, many individual consumers are trying to prepare complaint or answer documents using generative AI."

The quality, he acknowledged, is often poor. Hallucinations are a problem. Consumers who draft documents with AI still have to appear in court themselves. What Japan needs, Kobayashi argued, is something like LegalZoom--a structured, quality-controlled service--and instead has only the unguided use of general-purpose chatbots.

For in-house counsel at large corporations, the picture is different. Kobayashi's research suggests that the UPL constraints effectively create a two-tiered system: sophisticated in-house teams at major corporations are well served by legal tech tools, such as contract lifecycle management (CLM) systems, that operate at the periphery of legal advice. The constraint falls hardest on small businesses and consumers, who need more substantive legal guidance but are precisely the groups that structured legal AI services cannot reach under the UPL framework.

Okada, advising both corporate clients and the government committee, sees the tension clearly. "Japanese companies tend to be more risk averse than U.S. companies--very, in some cases very, very cautious," he said. "They cannot adopt services unless they are 100 percent sure it is secure." At the same time, he notes, this caution is not permanent: "More and more clients are positive, or they think it's inevitable."

The government committee on which Okada sits is working through exactly these questions--whether existing laws are sufficient, whether soft guidance or hard legislation is the right response, how to position Japan as an AI-friendly society while managing risks. The outcome will shape not just Japanese legal practice but the competitive position of Japanese AI companies in a global market increasingly shaped by U.S. and Chinese technology.

For now, the permission infrastructure holds. Whether it will survive the pace of change is another question entirely.


