Sullivan & Cromwell Apologized for 28 AI-Fabricated Court Citations
One of Wall Street's most elite law firms filed an emergency bankruptcy motion containing 28 fabricated citations, including quotes attributed to the court that never existed. It's the clearest sign yet that AI hallucinations are no longer a rookie mistake.

What happened
If you wanted the clearest sign that AI hallucinations are no longer just a problem for distracted undergraduates and overworked solo practitioners, it arrived last week in a Manhattan bankruptcy court.
Sullivan & Cromwell, one of the most prestigious law firms in the United States, counsel to presidents and Wall Street titans for over 145 years, filed an emergency motion that contained at least 28 fabricated legal citations, including quotations attributed to the court itself that do not exist.
A founder and co-head of the firm's restructuring group apologized to the judge. The errors, he explained in writing, were generated by an AI tool, and the firm's own AI-use policies had not been followed.
The episode is, among other things, a quiet public-service announcement: if the people who wrote the “trust nothing and verify everything” AI memo can be fooled, almost anyone can.
The April 9 filing and the April 18 apology
On April 9, 2026, Sullivan & Cromwell filed an emergency motion in the U.S. Bankruptcy Court for the Southern District of New York, before Chief Judge Martin Glenn. The motion was filed in the Chapter 15 bankruptcy case of Prince Global Holdings Limited, a group of British Virgin Islands entities.
The firm was representing court-appointed liquidators pursuing claims tied to Prince Group and its owner, Chen Zhi, a billionaire whom U.S. authorities have charged with running forced-labor scam compounds in Cambodia that targeted victims worldwide through cryptocurrency investment fraud.
The motion cited legal authorities, the bedrock of any court filing. But according to subsequent filings, those citations were riddled with errors. Opposing counsel at Boies Schiller Flexner LLP, representing Prince Group and Chen Zhi, went to check those citations. Many of them did not exist. Some authorities were mischaracterized. Quotations attributed to the court had no basis in any real opinion. Language attributed to the U.S. Bankruptcy Code could not be located anywhere in the statute. At least one citation referenced a different decision in an entirely different circuit. In total, defendants identified at least 28 erroneous citations across the filing.
On April 18, 2026, Andrew Dietderich, a founder and co-head of Sullivan & Cromwell's restructuring group, filed a formal apology with the court. He acknowledged that artificial intelligence output used in drafting had been included without being independently verified, in violation of the firm's policies.
“We deeply regret that this has occurred,” Dietderich wrote to Judge Glenn. “Regrettably, this review process did not identify the inaccurate citations generated by AI, nor did it identify other errors that appear to have resulted in whole or in part from manual error.”
Dietderich said he also personally telephoned lawyers at Boies Schiller Flexner to thank them for flagging the problems and to apologize directly. The firm withdrew the original motion, filed a corrected version with a redline showing all changes, and told the court it was “evaluating whether further enhancements to its internal training and review processes are warranted.”
The irony: a firm that wrote the rules on AI verification
The most striking detail in this story is that Sullivan & Cromwell was not operating without AI safeguards. The firm had them. It had written them. It had trained its lawyers on them. And those safeguards were ignored.
According to Dietderich's letter to the court, Sullivan & Cromwell maintains “comprehensive policies and training requirements governing the use of generative AI in legal work.” Before any firm lawyer is granted access to generative AI tools, the lawyer must complete two required training modules, with completion tracked and verified.
The training, the firm noted, “repeatedly emphasizes the risk of AI ‘hallucinations,’ including the fabrication of case citations, misinterpretation of authorities, and inaccurate quotations.”
And here is the phrase that should be printed on every AI tool, pasted on every monitor, and read before every query: the training “instructs lawyers to trust nothing and verify everything and makes clear that failure to independently verify AI-generated output constitutes a violation of Firm policy.”
“Notwithstanding these safeguards, the Firm's protocols were not followed here,” Dietderich wrote.
The firm could not, or would not, identify which lawyers had prepared the original motion or who was responsible for the failure to verify. It acknowledged that its internal review also identified other minor errors in other filings, which it attributed to “human error” rather than AI.
Why it matters
The scale of the problem: 330+ hallucination cases and counting
The Sullivan & Cromwell incident is not an isolated embarrassment for one law firm. It is the highest-profile example yet of a systemic problem that has been building for years.
Damien Charlotin, a researcher who maintains a public database of AI hallucination cases in court filings worldwide, has catalogued more than 330 such instances as of April 2026. That is not a handful of outliers. That is a pattern large enough to have its own academic index.
Recent examples include:
- Morgan & Morgan, a massive U.S. plaintiffs' firm, now faces potential sanctions in Wyoming after two of its lawyers cited non-existent cases produced by an AI program in a product liability suit against Walmart involving a hoverboard toy. The firm has since reminded its more than 1,000 lawyers that using fictitious case law in filings can be a firing offense.
- An Australian lawyer forfeited their practice license last year after AI-generated fabricated citations appeared in their filings.
- U.S. courts have sanctioned lawyers in dozens of cases after attorneys used AI for legal research and drafting without fully vetting the results.
The American Bar Association has since reminded members that traditional duties of competence and candor extend fully to AI-generated text, stressing that lawyers must verify any citations and factual assertions before filing. As Suffolk University law dean Andrew Perlman put it: when lawyers submit unchecked AI-generated citations, “that's incompetence, just pure and simple.”
A Thomson Reuters survey last year found that 63 percent of lawyers reported using AI at work, with 12 percent saying they did so regularly, even as experts warned that generative models are prone to inventing facts because they predict plausible-sounding text from large datasets rather than checking source material.
The underlying case: Prince Group and the scam compounds
The legal battle in which the AI-hallucination filing appeared is itself extraordinary, and deeply relevant to AuthentiLens's core mission of exposing fraud and deception.
Chen Zhi, the 37-year-old billionaire founder of Prince Group, has been accused by U.S. authorities of running forced-labor scam compounds in Cambodia and Myanmar. According to the Justice Department, these compounds housed workers who were held against their will and forced to carry out cryptocurrency investment fraud schemes, the same “pig butchering” scams that have cost American victims billions.
The scale of the alleged operation is staggering:
- U.S. authorities seized approximately $15 billion worth of Bitcoin from Chen Zhi in October 2025.
- British authorities sanctioned Chen Zhi in October 2025 and froze his UK assets, including a £12 million mansion on one of London's most expensive streets and a £100 million office building in the City of London.
- Chen Zhi was detained in Cambodia earlier this year and later repatriated to China, where he is expected to face multiple charges.
Prince Group has denied the allegations, calling them baseless.
Sullivan & Cromwell represents court-appointed liquidators from the British Virgin Islands who are pursuing claims against Prince Group and seeking to recover assets for alleged victims. The AI-hallucination filing, the one with 28 fake citations, was submitted in the midst of that asset-recovery push.
Why this matters beyond the legal profession
1. Elite institutions are not immune
The firms that draft AI policies for Fortune 100 companies are themselves making AI-hallucination errors in open court. Sullivan & Cromwell charges clients up to $3,000 per hour for its services. It is not a bottom-feeder operation cutting corners to save money. If Sullivan & Cromwell can file a court document with 28 fabricated citations, no organization is safe from this failure mode. The policies, the training, the mandatory modules, none of it matters if the final step (a human verifying the output against a primary source) is skipped.
2. AI output is designed to look authoritative
Fabricated court citations look exactly like real court citations. Same format. Same confident tone. Same jurisdictional shorthand. Same pin cites. Same “see also” signals. There is nothing in the surface text of an AI-generated citation that says, “I am not real.”
A senior partner reading a motion at 11 p.m. has no visual cue to work from. Verification is the only check. That is true for legal citations, for medical advice, for news, for social-media posts, for the message your “grandchild” just sent you asking for bail money. The interface does not warn you. You have to check.
3. The apology is a demonstration of AuthentiLens's core mission
Sullivan & Cromwell's public mea culpa is a real-world demonstration of what AuthentiLens's brand is built around: even information that looks formal, credentialed, and correct can be synthetic. The firm's training materials had predicted this exact scenario. The warnings about hallucinations were in writing. The instruction to “trust nothing and verify everything” was explicit. And yet, it still happened.
Courts are now building AI-specific penalty guidance the way they once built guidance for photocopier evidence and Internet hearsay. That same reckoning is reaching ordinary people, one scam message at a time.
How to protect yourself
The same failure mode that produced 28 fake court citations produces fake news articles, fake medical advice, fake investment research, and fake customer support answers. Here is how to defend against it.
- Treat any AI-drafted content as a first draft, not a final answer. Whether you are using ChatGPT, Claude, Gemini, or any other assistant, assume that every quote, citation, statistic, and name could be fabricated. The model does not know what is true. It knows what is plausible. Those are not the same thing. Verify everything against original sources before you act on it.
- Check citations against primary sources. For legal, regulatory, or academic documents, the fastest smell test is running the cited authority through a free service like Google Scholar, CourtListener, or PubMed. Real cases, statutes, and articles return hits. Hallucinations do not. If you cannot find the source in 60 seconds, treat it as fabricated until proven otherwise. For the technically inclined, this check can even be scripted, as the sketch after this list shows.
- Verify medical, financial, and safety claims against official sources. AI assistants routinely generate confident answers about medication doses, tax rules, and safety procedures that are partly or entirely invented. Cross-check against the CDC, the IRS, the FDA, manufacturer documentation, or the actual text of a law. If the AI gives you a number or a deadline, find that same number on a .gov or .edu domain before you rely on it.
- Be skeptical of “formal-looking” evidence in scams. Fake subpoenas, fake FTC notices, fake court orders, fake tax bills, fake police emails: all of these are now trivial to generate, often with real-looking case numbers, statute citations, and agency seals. Formality is not proof. A real court document is paired with a real case on a real docket that you can verify at PACER (pacer.uscourts.gov) or the equivalent state court system. A real government notice will survive a call to the agency's published phone number.
- When in doubt, scan it. You are not expected to become an AI-detection expert. Paste suspicious documents, screenshots, messages, or emails into AuthentiLens. Our detection engine flags synthetic content, impersonation patterns, and social-engineering language, regardless of how “official” the source looks.
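Here is what automated citation checking can look like in practice. The sketch below assumes CourtListener's free citation-lookup endpoint, which accepts a block of text and reports which citations resolve to real opinions. The URL, the response field names (`status`, `citation`, `clusters`, `case_name`), and the sample brief text are assumptions drawn from CourtListener's public API documentation, not a tested integration; confirm the current endpoint shape before relying on it.

```python
import requests

# Hypothetical excerpt from a brief whose citations we want to verify.
BRIEF_TEXT = """
As this Court held in Bank of Am. Nat'l Tr. & Sav. Ass'n v.
203 N. LaSalle St. P'ship, 526 U.S. 434 (1999), new value
alone does not satisfy the absolute priority rule.
"""

# CourtListener's citation-lookup endpoint (shape assumed from its
# public API docs; re-check before relying on it).
URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"


def check_citations(text: str) -> None:
    """POST a block of text and print which citations resolve to real cases."""
    resp = requests.post(URL, data={"text": text}, timeout=30)
    resp.raise_for_status()
    for hit in resp.json():
        # Per the docs, each citation's status mirrors HTTP codes:
        # 200 = matched a real opinion, 404 = nothing found.
        found = hit.get("status") == 200
        label = "OK     " if found else "SUSPECT"
        print(f"{label} {hit.get('citation')}")
        for cluster in hit.get("clusters", []):
            print(f"        -> {cluster.get('case_name')}")


if __name__ == "__main__":
    check_citations(BRIEF_TEXT)
```

The design point is simple: a real citation leaves a verifiable trail in a public index, and a fabricated one returns nothing. A few lines of automation can surface that difference in seconds, which is exactly the verification step the Sullivan & Cromwell filing skipped.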
What comes next
The Sullivan & Cromwell apology will not be the last of its kind. As AI tools become more deeply embedded in professional work (legal, medical, financial, journalistic, academic), the rate of hallucination-related errors will likely increase before it decreases.
The solution is not to ban AI. The solution is to build verification into every workflow. The “trust nothing and verify everything” rule that Sullivan & Cromwell wrote for its lawyers is the same rule that every person should adopt for every AI interaction.
The firm's mistake was not using AI. The mistake was trusting AI output without doing the verification step. That is a mistake anyone can make. And it is a mistake that AuthentiLens exists to help you avoid.
Sources
- Sullivan & Cromwell Apologizes to Judge for AI Hallucinations — Bloomberg Law
- Sullivan & Cromwell apologizes to US bankruptcy judge for AI-generated errors in Prince Group case — Canadian Lawyer Magazine
- Premier Wall Street law firm apologizes for AI 'hallucinations' — The Daily Record / Reuters
- Law firm that charges $3,000 an hour caught using AI — Yahoo Finance
- AI Hallucinations Lead to 28 Fake Citations in Court Filing — AsiaOne
- AI Hallucinations in Filing by a Top Law Firm — Reason / The Volokh Conspiracy
