When generative AI first captured headlines, the legal world held its breath in anticipation of how the technology would disrupt traditional practice. Early attempts to integrate AI in law were often cautionary tales — like the few solo practitioners who were caught using AI to cut corners, filing briefs that contained hallucinated citations. It was easy to dismiss these cases as unfortunate one-offs.
But over the past few months, the landscape has shifted. Several prominent law firms — and even some in-house legal teams — have faced sanctions for relying on generative AI without adequate safeguards. As a result, courts are now establishing new guidelines on when and how legal professionals may use AI tools in their work.
In this environment, forward-thinking legal teams need not only to keep pace with regulations but also to ensure that their workflows and technology partners are equipped for the new realities of AI-enabled practice.
AI in Big Law
Although legal practitioners who use AI are generally careful with the technology, there has been a string of high-profile incidents this spring in which legal teams filed briefs that contained hallucinations. At least a half dozen Big Law firms have been sanctioned or reprimanded for the improper use of AI this year. Here are a few of the most prominent cases.
Lacey v. State Farm Gen. Ins. Co., No. cv-24-05205 FMO (MAAx) (C.D. Cal. May 6, 2025): Attorneys from K&L Gates and Ellis George were fined $31,100 for submitting briefs filled with false, misleading, and inaccurate legal citations and quotations. Ellis George attorneys had relied on AI tools like CoCounsel, Westlaw Precision, and Google Gemini for legal research. K&L Gates, as co-counsel, failed to verify the citations before filing. The court found that roughly one-third of the citations were either nonexistent or incorrect.
Johnson v. Dunn, No. 2:21-CV-01701-AMM (N.D. Ala. May 19, 2025): Attorneys at Butler Snow submitted a brief created with ChatGPT that included false case citations and misinterpretations of actual cases, despite the firm’s policy requiring attorneys to review AI-generated content. U.S. District Judge Anna Manasco is weighing possible sanctions, such as fines, mandatory training, referrals to licensing boards, or temporary suspensions.
P.R. Soccer League NFP Corp. v. Federación Puertorriqueña de Futbol, No. 3:23-cv-01203-RAM-MDM (D.P.R. Apr. 10, 2025): A federal judge awarded over $50,000 in attorney’s fees to Paul Weiss after the opposing side filed motions containing fake legal citations and fabricated quotations that misrepresented or contradicted the law.
Evolving Guidelines on AI
In light of these cases and others, courts are creating new guidelines for legal practitioners looking to use AI in their work. As of February, about 2% of the more than 1,600 U.S. district and magistrate judges had issued standing orders on AI — 23 in all, according to a Law360 Pulse tracker.
Some have banned the use of AI outright. In the U.S. District Court for the Western District of North Carolina, judges issued a standing order requiring lawyers to certify that they did not use AI to prepare their briefs.
However, bans are still uncommon. Most orders permit legal teams and self-represented litigants to use AI to prepare their court filings as long as they disclose that they used the technology and attest to the accuracy of the AI-generated content. For example, Hawaii U.S. District Judge Leslie E. Kobayashi directed any party using generative AI to disclose that fact to the court and to identify which AI tool they used.
Yet other courts have declined to weigh in. The Fifth Circuit Court of Appeals decided not to adopt a special rule on the use of AI. The court instead reminded practitioners of their responsibility to ensure that any information they submit to the court is accurate.
Legal scholars and practitioners are divided on guidelines that limit or ban AI. In interviews with Law360 Pulse, several scholars said they were concerned that competing guidelines would sow confusion among practitioners. Others worry that even disclosure requirements, short of outright bans, will deter legal teams from using what could otherwise be a useful tool.
California U.S. District Judge Araceli Martínez-Olguín issued a standing order requiring lead trial counsel to personally verify the accuracy of AI-generated content. However, in big litigation, the lead trial attorney is not typically the person who checks case citations, so orders like this may well have a chilling effect on the use of AI for court submissions.
The landscape is changing quickly. In the months and years ahead, courts will continue to respond to the evolving role of AI in litigation. Like all of us, they will be watching closely to see how this technology reshapes the ethical and procedural landscape of legal practice. As generative AI continues to take root, we can expect a steady stream of new court guidelines and best practices, making it essential for law firms and legal departments to stay informed and prepared for what’s next.