In-house lawyers have been focused on the copyright side of class action lawsuits related to artificial intelligence since generative AI tools hit the market a few years ago, but their focus will need to shift as unhappy consumers start to drive a new wave of lawsuits, says Warrington Parker of Crowell & Moring.
As consumers learn that decisions affecting them are being made by AI tools, they’ll start to seek redress in the courts, the Crowell partner says in a report released Jan. 15 by the law firm.
“A company’s customers may number in the millions and be located across 50 state jurisdictions — and with a bunch of class action law firms ready to go, I expect it to be a wild, wild world out there for companies using AI to serve consumers,” Parker says.
Lawsuits could come on two fronts: selection cases and bad-decision cases.
“Selection cases are those involving things like hiring decisions and decisions to extend credit and loans,” says Parker. “These could arise any time a company has AI picking one person over another based on a number of variables, where plaintiffs could make a claim on the basis of age, race or gender.”
These lawsuits could allege that an AI tool is biased because it reflects the biases or inaccuracies of the data it draws on.
Parker recommends companies assess the tools and their training content for bias. “Some of this [assessment need] is already being imposed in some jurisdictions,” he says. “In New York, for example, you have to certify that the AI system you use is not biased.”
Selection-related class actions could also arise from companies not understanding what goes into their AI tools, exposing them to liability if they can’t explain the basis for the decisions those tools make.
“The technology … is essentially a ‘black box’ that can more or less train itself [and] typically relies on very complicated calculations, and, once trained, operates with little or no human intervention,” the report says.
On the bad-decision front, companies can be exposed to the extent they allow consumers to interact directly with an AI tool.
“Companies’ increasing reliance on AI to handle customer-facing interactions could … lead to consumer class actions stemming from the technology’s fallibility,” the report says.
“There is an assumption today that AI is rational, reasonable and makes great decisions, but that’s a falsity,” says Parker.
In activities ranging from chatbot responses to product returns, customer refunds and dynamic pricing, the report says, AI could return answers that don’t make sense. When those answers determine who gets a refund or what price a customer is charged under dynamic pricing, the result could be unfair and possibly illegal treatment of customers.
“It’s still unclear exactly how AI-related consumer class actions will emerge and evolve,” Parker says. “But companies should at least be attempting to put their arms around issues with a risk analysis. What are the risks associated with AI? What are the risks we don’t know about — and how can we learn about those?”