The Federal Trade Commission is investigating OpenAI to determine whether the company engaged in unfair or deceptive data security practices or risked harm to consumers, according to agency documents published by the Washington Post Thursday.
The FTC asked OpenAI to describe in detail the data it has used to train and develop each large language model since mid-2020, how the company obtained the data, the categories of content included in the data, and its policies and procedures for identifying and vetting data sources. The FTC and OpenAI did not respond to requests for comment prior to publication.
In total, the document includes 49 multipart requests for information related to the company’s retention of personal information, API integrations and plugins, processes for assessing and addressing risks, and model development and training. The agency also requested extensive documentation, from organizational charts to contracts.
This isn’t the first time OpenAI’s business and technology practices have come under scrutiny, but the investigation marks an escalation from earlier calls for regulatory oversight by advocacy groups, at least in the U.S.
In March, tech professionals called for AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months to better understand the technology’s risks. Then in May, more than 300 AI scientists and top executives signed an online statement published by the Center for AI Safety that equated the risks of AI to those of pandemics and nuclear war and called on global leaders to treat mitigating those risks as a priority.
The Center for AI and Digital Policy went a step further, filing a 46-page complaint with the FTC in March. The group escalated its case Monday by filing a supplement to the original complaint, outlining international regulatory efforts and “newly surfaced issues not fully captured in the initial complaint.”
In the supplement, CAIDP said the FTC failed to acknowledge its original complaint and that the agency’s silence on OpenAI “raises substantial concerns about the agency’s ability to safeguard the public as new challenges emerge.” The FTC confirmed to CIO Dive in March that it had received the CAIDP complaint but provided no additional comment.
“We welcome the announcement from the Federal Trade Commission,” Marc Rotenberg, executive director of CAIDP, told CIO Dive in an email Thursday following reports that the FTC launched an investigation. “It is critical that the FTC assess the business practices of OpenAI.”
Italy, Canada, France, Australia and Spain have already launched investigations into OpenAI’s data practices. The Italian Supervisory Authority imposed a temporary limitation prohibiting OpenAI from processing Italian users’ data on suspicion that the company was breaching the European Union’s General Data Protection Regulation. The ban was lifted after OpenAI gave users more control over data retention.
What CIOs need to know
Amid these regulatory efforts, businesses are adopting generative AI technology. Four in five employees say their company has already established or implemented policies and strategies around generative AI, or is in the process of doing so, according to a June Insight Enterprises survey of 405 U.S. employees at the director level or above, conducted by The Harris Poll.
Generative AI implementation is no easy feat for tech leaders. To get it right, CIOs have to weigh the risks associated with data privacy and security, skills gaps, budgetary constraints and an evolving regulatory landscape.
The FTC’s investigation “is the most major regulatory challenge to [OpenAI] since the Italian Data Authority temporarily banned ChatGPT just a few months ago,” Brian Jackson, principal research director at Info-Tech Research Group, said in an email. “Until the FTC investigation is resolved, it adds to the growing risks for organizations adopting OpenAI’s models and, to some extent, all generative AI models.”
Jackson suggested tech leaders lean on responsible AI frameworks, whether their businesses build, buy or are already deploying the technology. Responsible AI frameworks center on transparency, accountability, security, explainability and trustworthiness.
While the FTC is asking OpenAI about its data-gathering practices, the company has yet to disclose the sources it uses to train its large language models, Jackson said. For companies looking to implement the technology, that opacity hampers leaders’ ability to determine whether pirated material was used to train the models.
“CIOs have to take the approach of being transparent about when they are using AI and who owns the accountability for the decisions made with it,” Jackson said. “This includes monitoring the outcomes of the decisions and whether the model is starting to vary from its original intent.”
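Jackson’s point about watching whether a model “is starting to vary from its original intent” is, in practice, a drift-monitoring problem. One minimal sketch of what that monitoring could look like, assuming a model that emits numeric scores and an illustrative alerting threshold (the function name, threshold and data below are hypothetical, not drawn from any specific toolkit):

```python
# Sketch: flag when a model's recent outputs drift from the distribution
# it produced at deployment, using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # assumed alerting threshold; tune per use case

def outputs_have_drifted(baseline_scores: np.ndarray,
                         recent_scores: np.ndarray) -> bool:
    """True if recent outputs diverge from the deployment baseline."""
    _, p_value = ks_2samp(baseline_scores, recent_scores)
    return p_value < DRIFT_P_VALUE

# Illustration: scores logged at rollout vs. scores from the past week.
baseline = np.random.default_rng(0).beta(2, 5, size=5_000)
recent = np.random.default_rng(1).beta(2, 3, size=1_000)  # shifted distribution
if outputs_have_drifted(baseline, recent):
    print("Model outputs have drifted from the deployment baseline; review.")
```

For generative models that produce text rather than scores, the same idea applies to proxy metrics such as refusal rates, toxicity scores or output length.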
In June, FTC Chair Lina Khan reminded businesses that even when implementing emerging technologies with little regulatory oversight, leaders must still follow established laws related to fraud or discrimination.
“CIOs must be prepared to explain how AI models make predictions as much as possible, the anticipated impact of the predictions, and any potential biases,” Jackson said.
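What explaining model predictions looks like varies by model, but one common, model-agnostic starting point is permutation importance: measure how much held-out accuracy drops when each input feature is shuffled. A brief sketch on synthetic data (the dataset and classifier here are stand-ins for illustration, not any particular production system):

```python
# Sketch: rank input features by how much shuffling each one degrades
# a trained classifier's held-out accuracy (permutation importance).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, mean_drop in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop {mean_drop:.3f}")
```

Features whose shuffling barely moves accuracy contribute little to predictions; large drops tied to sensitive attributes, or to proxies for them, are a signal to investigate potential bias.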