Duane Pozza is FTC regulation practice chair and Nick Peterson is of counsel at Wiley Rein. Views are the authors’ own.
Over the past year, regulators have brought actions against companies for allegedly exaggerating the use or benefits of artificial intelligence in their marketing efforts. Even as the Trump administration promises to pull back on AI-specific regulation, agencies are likely to continue scrutiny of companies making unsupported claims about their use of AI, a practice often known as AI washing. These kinds of enforcement actions often have bipartisan support and can be based on existing laws.
For example, the Federal Trade Commission announced actions against five companies as part of Operation AI Comply, in addition to settlements announced near the end of 2024. In early 2024, the Securities and Exchange Commission announced its own AI washing settlements with investment advisers. In fall 2024, the Department of Justice announced a number of indictments with AI-related components. Although the technology is novel, federal agencies that see issues with AI deployment have focused in significant part on traditional allegations like fraud and misrepresentation. To the extent a company markets its AI use or capabilities, it should develop a risk mitigation strategy to avoid drawing scrutiny over the accuracy of its claims.
Enforcement efforts
In September 2024, the FTC announced five enforcement actions. Three targeted business opportunity schemes claiming that customers could use AI to earn money by operating online storefronts; all three were halted by federal courts and placed under the control of a receiver. Another action targeted a company that touted ineffective “robot lawyer” services; that company settled the claims for $193,000. All four of these actions had bipartisan support, in contrast to a fifth action, against Rytr LLC, a seller of AI-generated written content, which relied on a more novel legal theory.
In December 2024, the FTC reached a settlement with another company, IntelliVision Technologies, based on allegations that it made misleading or unsubstantiated claims about its AI-powered facial recognition software. This action also had bipartisan support, including from incoming FTC Chair Andrew Ferguson, who wrote separately to clarify his views on one claim but noted he supported the other deception claims “without reservation.”
The FTC is not alone in its enforcement efforts. In March 2024, the SEC charged two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., with making false and misleading statements about their purported use of AI. Delphia allegedly made false claims about the use of AI in its services, including in SEC filings, and Global Predictions allegedly made misleading claims about its use of AI on its website and social media accounts. In June 2024, the SEC charged the CEO of Joonko Diversity Inc. with allegedly defrauding investors through material misrepresentations about the company, including about AI technology that did not exist.
Similarly, the DOJ has pursued actions against individuals connected to companies that allegedly deceived investors. In fall 2024, the DOJ announced several indictments of executives at AI-related companies who allegedly lied about their companies’ finances, including representations about revenue and other financial information. The DOJ is also pursuing parallel criminal charges against the CEO of Joonko Diversity Inc.
Focus on fraud
AI continues to advance rapidly and has the potential to be a transformative technology, yet few regulations specifically govern it. No comprehensive federal legislation is on the horizon, and the Trump administration is generally against new AI-specific regulation. Even so, the enforcement actions brought by agencies over the past year rest largely on traditional fraud and deception theories that do not depend on AI-specific laws or regulations, and they often draw support across party lines. Federal agencies’ focus on fraud and deception provides a framework for investigations and enforcement while potentially avoiding actions that appear to fill a regulatory gap.
Limiting exposure
By instituting a comprehensive AI risk management strategy that combines proactive monitoring with the ability to react when issues arise, companies can help protect themselves against enforcement actions and other disputes arising from their use of AI.
First, as a fundamental step, companies should ensure that AI-related claims are verified and supported by reliable evidence. That entails documenting the testing and validation that back up those claims, and it means reviewing marketing materials and public-facing statements with a close eye. The FTC, for example, expects companies to have a “reasonable basis” for any claim made about an AI-related product or service. If a company ever does face scrutiny, having this documentation in hand will be essential.
Second, and relatedly, companies should maintain accurate disclosures about their use of AI. This means resisting the temptation to overstate AI capabilities and being transparent, through appropriate disclosures, about the limitations of an AI product or service. Inaccurate claims can form the basis of a government investigation and allegations of liability, as in several of the recent FTC actions against companies making claims about the benefits of AI technology.
Third, companies must manage AI-related risks appropriately, which requires a multi-faceted approach that accounts for compliance challenges that, in the case of AI, may be constantly evolving. Companies should take advantage of government resources on compliance and best practices, which agencies like the FTC and DOJ continue to publish and update, as well as risk management frameworks published by agencies like the U.S. Department of Commerce. Risk mitigation must also account for state enforcement and state-specific AI laws, which have grown in number over the past year. Companies must further consider potential civil liability, such as litigation from shareholders based on inaccurate public statements or civil suits alleging fraud or deception under consumer protection laws. And a company that does business with the government should take extra precautions to avoid False Claims Act suits based on inaccurate claims in bids or proposals submitted to government agencies.
Finally, if a problem arises with a company’s use or marketing of AI despite its best efforts, it is important to respond quickly. Companies should implement rapid response protocols so they can promptly investigate and remedy any concerns about how AI is used in their products and services, and communicate that response to stakeholders, including government agencies and the public. Addressing actual or perceived issues quickly and thoroughly can go a long way if a company ever finds itself the target of a government investigation.
At bottom, AI technology is here to stay, and public and government interest in it will continue. Companies should stay ahead of potential risks and take steps to ensure their claims about AI are well-grounded and accurate, even as their use of the technology expands.