A prominent law firm has denied using artificial intelligence to generate flawed legal citations after a federal judge identified multiple citation errors in a motion to dismiss. The firm’s formal court filing attributes the mistakes to human error, offering a notable counterpoint to a series of high-profile incidents involving generative AI in legal drafting.
In the U.S. District Court for the Northern District of Georgia, defense counsel representing Bank of America in Ringer v. Bank of America, N.A. responded to a judicial order to show cause why sanctions should not be imposed for inaccurate legal citations. In a sworn declaration, the attorney denied that ChatGPT or any other AI-based system was used to prepare the contested filing. Instead, the errors were attributed to improper internal drafting methods and a failure to perform a final citation check.
In November 2025, the court issued an order identifying “multiple inaccurate quotations or citations,” prompting an Order to Show Cause to defense counsel. In response, the attorney acknowledged that several case citations and quotations in the brief were incorrect but stated unequivocally that no generative AI tool was used in the research or drafting of the motion. All research was performed manually using Westlaw and other traditional legal research tools.
The filing detailed how a mixed working document — combining copied excerpts, summaries, and paraphrased notes — was mistakenly integrated into the final brief without adequate verification. The lawyer took full responsibility, describing the oversight as a human error exacerbated by insufficient quality control.
Remedial steps described in the filing included stricter internal protocols for citation verification, secondary review of briefs, and additional training for attorneys — measures meant to prevent similar errors in the future.
This dispute comes as U.S. courts increasingly confront the issue of “hallucinated” citations: non-existent or fabricated authorities that sometimes emerge when generative AI tools are used without rigorous human oversight.
Federal courts and state tribunals have sanctioned lawyers for submitting filings containing AI-generated fake cases. In Colorado, two attorneys were fined under Federal Rule of Civil Procedure 11 for submitting briefs with nearly 30 defective citations generated by AI tools. The court emphasized attorneys must verify all sources, regardless of how they were generated.
In Wyoming, three lawyers from a major plaintiffs’ firm were sanctioned after their motion included eight fabricated case citations produced by AI. The court fined the drafting attorney $3,000 and imposed additional sanctions on supervising counsel for inadequate oversight.
In a separate case in California, a state appellate court fined an attorney $10,000 for filing an appeal with dozens of fake quotations generated by ChatGPT, warning that attorneys must personally verify every citation.
Other federal judges have imposed personal monetary sanctions or removed counsel after finding AI-generated fake legal authorities in filings, reflecting a broader trend of judicial skepticism and enforcement.
Legal citations serve as the backbone of adversarial legal argument. Courts rely on accurate authorities to assess the merits of disputes; submitting inaccurate or misleading citations, whether from human error or AI hallucinations, threatens the integrity of the judicial process. Failure to conduct a thorough cite check can violate professional conduct rules, including the duty of candor toward the tribunal under Model Rule 3.3, diligence under Rule 1.3, and competence under Rule 1.1.
Judges have become explicit in instructing attorneys that AI tools do not absolve them of the duty to verify every source. In some cases, courts have required attorneys to refund fees, pay opposing counsel’s costs, or undergo disciplinary review for failures tied to AI-generated errors.
In the Georgia Ringer case, the court has not yet ruled on whether sanctions will be imposed following the firm’s detailed response and remedial assurances. The dispute highlights an important distinction in the current era of legal technology: not all citation errors stem from AI use, but courts scrutinize all filings for accuracy regardless of origin.
Across the legal profession, firms are updating internal policies — including citation verification requirements, training on AI limitations, and clear guidelines about acceptable AI use — to mitigate risks and preserve professional credibility.
As courts continue to shape norms around AI’s place in legal practice, attorneys must balance efficiency gains from new tools with their foundational ethical obligations to ensure accuracy, integrity, and trust in the judicial system.
Defendant’s Response to Order to Show Cause, Ringer v. Bank of America, N.A., U.S. District Court for the Northern District of Georgia (Nov. 25, 2025).
Court sanctions attorneys for AI-related false citations under Rule 11 (National Law Review).
Morgan & Morgan attorneys fined for AI-generated fake cases in Wyoming (Wikipedia).
California attorney fined $10,000 for ChatGPT-generated fake citations (Maryland Daily Record).
Reuters reporting on sanctions and AI use in legal filings (Reuters).
Ethics commentary on professional duties concerning AI tools (Forbes).
Trends in court enforcement and AI policy updates (Esquire Deposition Solutions).
Did the law firm use AI to draft the brief? No. In a sworn court filing, defense counsel explicitly stated under penalty of perjury that neither ChatGPT nor any other generative AI or AI-assisted legal research tool was used to research or draft the brief. The attorney attributed the citation errors to human mistakes and flawed drafting practices.
Why did the court suspect AI involvement? The court raised the issue because inaccurate or non-verbatim citations have increasingly appeared in cases involving improper use of generative AI. Judges nationwide have begun explicitly asking attorneys whether AI tools were used when citation errors resemble known AI “hallucination” patterns.
What caused the citation errors? According to the filing, the errors stemmed from combining paraphrased summaries, copied excerpts, and personal notes in a single working document, followed by a failure to conduct a final cite check before filing. Some paraphrases were mistakenly presented as direct quotations, and some authorities were miscited.
Has the court ruled on sanctions? As of the filing date referenced in the article, the court had not yet ruled on whether sanctions would be imposed. The attorney requested that sanctions be denied, citing corrective actions, lack of bad faith, and the absence of AI use.