Dive Brief:
- Jesse Furman, a U.S. District Judge for the Southern District of New York, has ordered a lawyer to explain why three cases cited in a motion to end supervised release for former Trump fixer Michael Cohen appear to be made up.
- “As far as the Court can tell, none of these cases [cited by Cohen] exist,” Furman said in a Dec. 12 order to show cause directed at Cohen lawyer David Schwartz of Gerstman Schwartz. He also asked Schwartz to explain any role Cohen had in selecting the cited cases.
- Although Furman didn’t ask about any role ChatGPT or other AI tools played in selecting the cases, the incident appears similar to a widely reported case from earlier this year in which a lawyer using an AI tool cited half a dozen cases that did not exist.
Dive Insight:
Cohen was sentenced in 2018 for campaign-finance violations, tax fraud and bank fraud committed in part in his role as personal attorney to Donald Trump during Trump’s presidential campaign.
After his release from prison in 2021, Cohen began a three-year term of supervised release, from which he has been seeking early termination. In his third attempt at early release, Cohen’s attorney Schwartz cited three 2nd Circuit cases in a Nov. 29 letter – U.S. v. Figueroa-Florez, U.S. v. Ortiz and U.S. v. Amato – that purportedly support Cohen’s bid for early termination. The Nov. 29 letter was reported by Law360.
Furman said cases named Figueroa-Florez and Ortiz do exist, but the citations point to passages that have no bearing on early release, and Amato does not appear to correspond to any case on record.
“The Court contacted the Clerk of the Court for the United States Court of Appeals for the Second Circuit, who found no record of any of the three decisions and reported that the one listed docket number (for Ortiz) is not a valid docket number,” Furman said.
The judge is giving Schwartz until Dec. 19 to either provide copies of the cases or explain why he shouldn’t be sanctioned for including the citations in his motion.
He also directed Schwartz to explain the role, if any, “Cohen played in drafting or reviewing the motion before it was filed.”
Earlier sanctions
AI-generated legal citations have been a worry since another Schwartz, Steven Schwartz of Levidow, Levidow & Oberman, in March included half a dozen made-up cases in a filing in a personal injury case, also in the Southern District of New York.
In that case, Schwartz later filed an affidavit saying the ChatGPT tool he used not only provided the legal sources but also assured him the citations were reliable.
Schwartz and another lawyer at Levidow, Peter LoDuca, were each fined $5,000 by U.S. District Judge Kevin Castel and directed to send letters, along with Castel’s opinion, to the judges to whom the generative AI tool attributed the fake decisions.
In his ruling, Castel said reliance on such a tool can severely damage the judicial process and public confidence in the court system.
“Many harms flow from the submission of fake opinions,” Castel said. “The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct.”
AI-generated cases are among the so-called hallucinations that legal professionals have expressed concern about as they look at using ChatGPT and other tools in their work.
“When you’re looking at generative AI, is it telling you the truth?” Christina Wojcik, managing director of corporate at legal services company LexFusion, has said. “How do you validate it’s actually telling you the truth and you have confidence over the truth it’s telling you?”
Judges are also taking steps to keep fake citations out of filings. Judge Brantley Starr of the Northern District of Texas issued a standing order in June requiring attorneys appearing in his court to certify either that they did not use generative AI to draft any portion of their filings or that a human being checked any AI-drafted language for accuracy.