Dive Brief:
- Steven A. Schwartz and Peter LoDuca of Levidow, Levidow & Oberman on Thursday were fined $5,000 for citing made-up cases in a lawsuit that made headlines last month because of their reliance on OpenAI’s ChatGPT chatbot.
- Judge P. Kevin Castel of the U.S. District Court for the Southern District of New York, in a 34-page ruling, said the lawyers “abandoned their responsibilities” by submitting non-existent judicial opinions created by an AI tool, then “continued to stand by the fake opinions after judicial orders called their existence into question.”
- The relatively nominal financial penalty is intended less as a punishment than as a deterrent to similar conduct. “The Court concludes that a penalty of $5,000 paid into the Registry of the Court is sufficient but not more than necessary to advance the goals of specific and general deterrence,” Castel said in his sanctions order. The lawyers were also directed to send Castel’s opinion to the judges to whom the generative AI tool attributed the fake decisions.
Dive Insight:
The incident involving OpenAI’s chatbot took place in a personal injury lawsuit filed by a man named Roberto Mata against Colombian airline Avianca in the Southern District of New York.
Schwartz, one of the plaintiff’s attorneys, wrote in an affidavit that he consulted ChatGPT to supplement legal research he performed when preparing a response to Avianca’s motion to dismiss.
Judge Castel last month wrote in his order regarding the plaintiff’s filing that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
In a filing provided to Castel earlier this month, Schwartz and his lawyers admitted he was unfamiliar with the OpenAI chatbot.
In his ruling, Castel said unverified reliance on such a tool exposes problems that can severely damage the judicial process and public confidence in the court system.
“Many harms flow from the submission of fake opinions,” Castel said. “The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct.”
What’s more, Judge Castel said, “it promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”
Look for more coverage of Thursday’s ruling in the coming days.
For background, Legal Dive previously documented how Schwartz used the ChatGPT chatbot for legal research without verifying the information it produced was real. Six cases cited in a subsequent legal filing submitted by Schwartz’s colleague LoDuca were later discovered to be fake.
Legal Dive also covered issues at play beyond the use of generative AI, including the limited legal research tools available at Schwartz and LoDuca’s firm.
Additional coverage looked at how a federal judge and a federal magistrate judge in other districts responded to the scandal in hopes of preventing similar generative AI issues in their courts.
-- Lyle Moran contributed reporting