AI on Trial: Nippon Life Takes OpenAI to Court Over Alleged Unauthorized Practice of Law

By Maia E. Jerin

On March 4, 2026, Nippon Life Insurance Company of America filed a lawsuit in the U.S. District Court for the Northern District of Illinois against OpenAI Foundation and OpenAI Group PBC that has quickly drawn the attention of the legal profession. The complaint alleges that OpenAI’s generative AI product, ChatGPT, crossed the line from providing general information into the unauthorized practice of law by giving legal advice and drafting litigation documents for a pro se litigant in active federal court proceedings. In doing so, Nippon claims, ChatGPT induced the breach of a binding settlement agreement, fueled abusive motion practice, and caused significant financial and reputational harm.

According to the complaint, the underlying litigation had been resolved through a settlement agreement that resulted in dismissal with prejudice. After the settlement, the litigant uploaded correspondence from her attorney into ChatGPT and asked the system to evaluate the advice she had received. Nippon alleges that ChatGPT responded by questioning the attorney’s conduct and the legitimacy of the settlement, encouraging the litigant to fire her counsel and pursue further legal action. The AI system allegedly went on to assist with legal research, generate arguments under Federal Rule of Civil Procedure 60(b), and draft motions, subpoenas, and other filings seeking to reopen the closed case and later to pursue related claims in a new lawsuit. Nippon contends that many of these filings served no legitimate legal or procedural purpose and that responding to them cost the company approximately $300,000 in attorneys’ fees.

The insurer seeks injunctive relief barring OpenAI from providing legal assistance in Illinois, declaratory relief under the state’s unauthorized practice of law statute, compensatory damages, and $10 million in punitive damages. OpenAI has publicly denied the allegations and has stated that the complaint lacks merit, pointing to its usage policies that caution against relying on ChatGPT for legal advice without a licensed professional involved.

What makes the Nippon case particularly notable is not the concern over AI misuse itself, but the target of the lawsuit. Courts have already dealt with AI-generated legal content, but almost exclusively by sanctioning the human lawyers or litigants who submitted it. In the now well-known case of Mata v. Avianca, a federal court sanctioned attorneys who filed briefs containing fabricated case law generated by ChatGPT, holding that lawyers remain fully responsible for verifying the accuracy of AI-assisted work and cannot shift blame to the technology. Since Mata, courts around the country have issued similar sanctions and warnings, consistently treating AI-generated legal analysis as substantive legal work that demands professional judgment and oversight. Those cases, however, stopped short of imposing liability on the AI developers themselves.

Long before generative AI entered the courtroom, courts also grappled with whether automated legal software could amount to the unauthorized practice of law. In In re Reynoso, the Ninth Circuit held that a bankruptcy software system engaged in unauthorized practice of law by selecting exemptions, explaining legal consequences, and preparing filings tailored to a user’s circumstances, even though the assistance was entirely automated and involved no human advisor. The court emphasized that the absence of person-to-person interaction did not insulate the software provider from UPL liability where the system was effectively exercising legal judgment on behalf of users. That reasoning has been echoed in other cases involving bankruptcy petition preparers and legal “expert systems,” where courts have drawn a line between clerical assistance and tools that provide individualized legal advice.

At the same time, other courts have taken a more restrained approach in cases involving online legal services. Litigation involving LegalZoom and similar platforms often turned on whether the service merely facilitated user-driven document completion or instead provided personalized legal advice, with mixed outcomes and frequent settlements. More recently, in Upsolve, Inc. v. James, a federal court recognized First Amendment limits on enforcing UPL statutes against a nonprofit organization providing constrained legal guidance to low-income debtors. That decision, however, was expressly narrow and grounded in access-to-justice concerns, and it did not involve allegations that the technology encouraged violations of court orders or induced the breach of settlement agreements.

Against that backdrop, the Nippon Life lawsuit pushes familiar doctrine into new territory. It asks whether a general-purpose generative AI system, and the company that develops and deploys it, can itself be deemed to have practiced law without a license by generating tailored legal advice and litigation documents. It also presses traditional tort theories—tortious interference and abuse of process—into the AI context by alleging that automated outputs knowingly encouraged a litigant to breach a settlement and weaponize the judicial process. How the court resolves these issues, particularly at the motion-to-dismiss stage, will likely turn on whether ChatGPT’s alleged conduct is characterized as general legal information or as advice directed at a specific dispute, and whether existing UPL statutes can sensibly be applied to nonhuman actors operating at scale.

For practicing lawyers, the case does not alter existing ethical obligations, which already require supervision, verification, and accountability for any AI-assisted work product. It does, however, signal that questions of AI governance and liability have moved beyond internal firm policies and CLE hypotheticals and into active, high-stakes litigation. Regardless of how the case is ultimately resolved, Nippon Life v. OpenAI underscores that the boundary between permissible legal technology and the regulated practice of law is no longer an abstract concern, but one that courts may soon be required to define with real consequences for developers, lawyers, and litigants alike.

Questions about this article? Contact Maia Jerin.