Dive Brief:
- The federal government has sent its strongest message yet that companies touting their use of artificial intelligence must be able to substantiate the claim or face legal trouble. On March 18, the Securities and Exchange Commission fined two investment advisory firms a combined $400,000 for saying they used AI in their services when they didn’t.
- “We’ve seen time and again that when new technologies come along, they can create buzz,” SEC Chairman Gary Gensler said in a statement. “Those purporting to use those new technologies … should not mislead the public by saying they are using an AI model when they are not.”
- The agency fined Delphia $225,000 and Global Predictions $175,000 under settlement agreements the companies entered into with the SEC.
Dive Insight:
Since 2019, Toronto-based Delphia had been saying on its website and in its marketing material that it made its investment decisions in part by running data from its clients’ social media accounts, banking transactions, and credit card and online purchases through an AI algorithm to give them an “unfair investing advantage” over other investors.
“Delphia’s proprietary algorithms combine the data invested by its members with commercially available data to make predictions across thousands of publicly traded companies up to two years into the future,” the company’s material said.
San Francisco-based Global Predictions operates an interactive online platform it calls PortfolioPilot to make investment allocation recommendations to clients.
In neither case did the company actually use AI in its service, the SEC said.
“Delphia admitted to the Division of Examinations in July 2021 that it had not used any of its clients’ data and had not created an algorithm to use client data,” the agency said in its settlement agreement with the company.
“Global Predictions … inaccurately claimed to be the ‘first regulated AI financial advisor’ on its public website, in emails to current and prospective clients, and on various social media sites,” the SEC said in its agreement with the company, but “could not produce documents to substantiate this claim.”
Eye on AI
Both the SEC and the Federal Trade Commission have been saying for months they’re keeping an eye on how companies depict their use of AI. Last year, Michael Atleson, an attorney who monitors advertising practices for the FTC, used the agency’s blog to warn executives against what’s come to be called AI washing.
“Your performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users or under certain conditions,” he said.
The agency says it has authority under Section 5 of the FTC Act to enforce deceptive claims about AI.
AI law specialist Lena Kemp of LK Law Firm said it’s incumbent on in-house counsel to talk with their IT colleagues to know whether their company’s use of AI allows it to make claims about it.
Technology that relies on basic automation or rule-based systems shouldn’t be called AI unless it has genuine learning and adaptive capabilities, she said in a Legal Dive column.
“It exaggerates the AI predictive capacities if the product primarily relies on simple statistical analyses or predetermined rules,” she said.
Nor can you say your technology is AI-based just because you used AI in its development.
“If the team claims AI simply because they used AI tools to develop the product or technology, then the product is not AI-powered,” she said.
“Everyone may be talking about AI, but … they should make sure that what they say … is true,” Gensler said in a video he released March 18.