Dive Brief:
- Over 60% of corporate executives responding to a recent KPMG survey said they expect regulatory concerns to put a damper on their generative artificial intelligence investment plans.
- About 40% of respondents reported plans for a three- to six-month pause on generative AI investments while they monitor the regulatory landscape, and a quarter said they’ll hold off either until 2024 or indefinitely, until there’s more certainty. Only 35% said they were moving ahead with AI initiatives as scheduled, despite the uncertain regulatory environment.
- Amid chatter in Washington, business leaders are bracing for AI regulatory action in areas such as data privacy, data security, transparency, and copyright infringement, according to the poll.
Dive Insight:
Corporate leaders are increasingly zeroing in on the potential benefits of generative AI, which refers to technology capable of producing text, images, or other content based on the data used to “train” it.
The technology is widely viewed in the business community as a potential game changer that can help drive growth and productivity across the economy. But it’s drawing close scrutiny in Washington over concerns that it poses risks in areas such as data privacy and security.
“This is probably one of the biggest disruptive moments — certainly for knowledge workers — that many of us have ever lived through,” Steve Chase, KPMG’s U.S. consulting leader, said in an interview.
ChatGPT, a generative AI tool created by Microsoft-backed OpenAI, quickly gained fame after its launch in November 2022. It’s known for a wide range of sophisticated capabilities, from answering complex questions across a variety of subject areas to writing reports and software code.
KPMG’s survey found that 80% of executives believe generative AI will disrupt their industry and nearly all (93%) think it will provide value to their business.
The research also showed, however, that companies weighing adoption are worried about the risk of regulatory uncertainty, which ranked as the most significant barrier to implementation.
Still, companies planning to pause their AI investments could be taking an even bigger risk, according to Chase.
In late July, the White House announced that leading AI developers, including Microsoft, Google and OpenAI, had committed to adhering to a set of voluntary safeguards for the technology. The companies pledged to invest in cybersecurity and insider threat safeguards, prioritize research on AI’s societal risks and publicly report AI systems’ capabilities and limitations.
“These commitments are real, and they’re concrete,” President Joe Biden said during a White House event at the time. His AI agenda also includes plans for an executive order as well as a push for bipartisan legislative action, according to the White House.
Meanwhile, AI is also getting attention on Capitol Hill. In June, Senate Majority Leader Chuck Schumer, D-N.Y., unveiled a rough plan for AI legislation, dubbed the SAFE Innovation Framework. He is expected to host a series of AI forums in the fall.
Generative AI models are trained and fine-tuned using large data sets, which can include personally identifiable data and copyrighted content scraped from publicly available internet sites, according to a May report from the Congressional Research Service.
“Critics contend that such models rely on privacy-invasive methods for mass data collection, typically without the consent or compensation of the original user, creator, or owner,” the report said. Besides raising privacy issues, such forms of data collection may also trigger questions related to copyright ownership and fair use, it said.
Currently, the U.S. doesn’t have a comprehensive federal privacy law. But it does have a patchwork of federal and state privacy laws and regulations that may address some emerging AI-related concerns, according to Brian Hengesbaugh, chair of Baker & McKenzie LLP’s Global Data Privacy and Security Business Unit.
“From a legal and policy perspective, the rapid rise of this type of transformative innovation places stress on existing legal and privacy frameworks,” Hengesbaugh said in a June blog post published by the International Association of Privacy Professionals. “New interpretations will certainly be needed, but legislators, regulators and policymakers may believe that there is a need to rush to act quickly to adopt entirely new and potentially prescriptive laws, regulations and policies.”
AI also poses significant policy challenges in areas such as intellectual property rights, antitrust, and commercial contracting, he added.
As the White House and members of Congress study policy challenges posed by AI, Federal Trade Commission Chair Lina Khan has vowed to police the market using the agency’s existing legal authorities. The agency is reportedly investigating OpenAI to determine whether the company engaged in unfair or deceptive security practices or risked harm to consumers.
While recent events have put a public spotlight on AI developers, the debate has implications for players across the broader ecosystem, including users of the technology, according to Matthew Miller, cybersecurity services principal for KPMG US.
It’s incumbent upon users “to protect any content they may be providing to external third parties,” Miller said in a separate interview. “They could be exposed to risk if a security incident were to occur.”
An AI security guide published by KPMG advises companies deploying the technology to take steps such as conducting a comprehensive assessment of the regulations that apply to their specific industry, geography and use cases.
“Inadequate risk and compliance management for AI systems can result in severe consequences, including financial penalties, reputational damage, legal liability, and loss of stakeholder trust,” the guide states.
KPMG surveyed 200 senior U.S. business leaders in June. The respondents represented companies with at least $1 billion in annual revenue across a range of sectors.