In the annals of aviation litigation, Roberto Mata v. Avianca isn’t a particularly interesting complaint: Airline beverage trolley injures passenger’s knee during a flight from El Salvador to New York. Passenger files civil suit.
What happened in the course of the litigation, however, has turned Mata into a striking warning to attorneys concerning the use — and misuse — of generative artificial intelligence in their practices.
After Avianca sought to dismiss the case in the Southern District of New York, Mata’s attorney, Steven Schwartz, turned to ChatGPT for assistance in preparing his client’s response opposing the motion.
Unbeknownst to Schwartz, the OpenAI software fabricated a half-dozen cases involving airline defendants, which the attorney cited and filed with the court. Defense counsel could not find the cases, nor could District Judge Kevin Castel.
The episode led to a $5,000 sanction from the court against Schwartz, his co-counsel and their firm, on top of headlines around the world in 2023 showcasing one of AI’s highest-profile professional failures.
The lesson was clear: Lawyers must uphold their oversight duties and closely police generative AI tools, according to a panel discussion in which two former federal magistrate judges examined lawyers’ ethical responsibilities when employing AI tools.
Mata is “a Cher-level, one-word case that has reshaped how we all use AI,” said Andrew Peck, senior counsel at DLA Piper and a retired U.S. magistrate judge in New York.
Despite the Mata debacle, lawyers take a generally positive view of incorporating the technology into their practices, said moderator Cat Casey, chief growth officer at Chicago-based Reveal, a legal tech provider that sponsored the Dec. 17 webinar.
She cited Thomson Reuters research finding that 60% of attorneys consider themselves optimistic about AI entering the legal profession. In the early days of AI, many lawyers feared “a zombie apocalypse” from this tech, Casey said. Since then, “there’s been a lot of optimism and a lot of shift in the perception.”
Casey said this trend mirrors the technology’s rapid public adoption: ChatGPT took 65 days to amass 100 million users, compared with seven months for Instagram and 10 years for Netflix to reach the same milestone.
When using AI tools, lawyers must recall their obligations under Rule 11 of the Federal Rules of Civil Procedure for representations to a court, said Ron Hedges, a retired U.S. magistrate judge from New Jersey.
The senior-most attorney on a matter carries an ethical responsibility “to supervise everybody and everything working under you,” Peck said. “Just like the senior lawyer would have to take the responsibility for the research the junior lawyer gave to him or her. You can’t tell the judge, ‘I didn’t miscite this case, Your Honor, the AI miscited this case.’”
AI in production and privacy
One area that is likely to see judicial review is the use of AI in document and privilege reviews, privilege logs and document production, Peck said.
“I think lawyers are being very cautious in using GenAI for that process, but I fear we are going to start seeing the same sort of issues and motions that we saw with technology-assisted review: ‘It’s a black box, the other side shouldn’t be using TAR because we don’t know what they’re doing and we don’t know how it works,’” Peck said.
“We have to hope that the first judicial decision in this area will be approving of AI,” he said. “One bad opinion, perhaps because of bad lawyering, can set this back for years.”
AI also carries confidentiality concerns that lawyers must weigh in order to hew to their professional standards, such as when a public AI platform is used for even routine queries that may involve client data, Hedges said. That duty also ties to Rule 1.1 of the American Bar Association’s Model Rules of Professional Conduct, which covers competence in the client relationship, he said.
“Take steps to prevent any information being leaked, for lack of a better phrase, into a large language model,” he said.
Lawyers should also inquire with vendors about how models work, how they were trained and what types of bias they may display, according to the panel. And if a matter involves using the client’s large language model, the lawyer should pose those same questions to the client, Hedges said.
Above all, lawyers need to stay on top of how the technology is changing and how it’s evolving in the legal world, the panel said. Ongoing education and hands-on experience will be critical to maintaining professional competence as AI spreads into new areas of legal practice and draws new judicial rulings.
“You don’t want to be like the lawyers in Mata against Avianca and come up with fake citations to the court,” Peck said. “But, conversely, don’t be afraid to experiment behind the scenes. You don’t want to be experimenting when you're in the court or submitting something to the court. But use it — there is no reason to be afraid of gen AI.”