Generative AI adoption is picking up steam, but enterprises are not doing enough to mitigate risk.
Just one-third of enterprises are working to curb security risks introduced by generative AI, and even fewer are addressing inaccuracies, according to a McKinsey survey of 913 respondents whose organizations have adopted AI in at least one use case.
Businesses that choose to steamroll ahead in hopes of gaining a competitive edge are likely to find a host of unintended consequences waiting for them.
“People that have never had AI strategies before are jumping in,” said Bill Wong, principal research director of AI and data analytics at Info-Tech Research Group, during a July live event. “The presidents and the CEOs are asking technology folks, ‘When are we going to get our AI app?’”
Whether generative AI tools are interacting with customers or internal employees, enterprises are responsible for ensuring those tools deliver on the purpose that has been communicated to staff.
Fewer than one-third of employees feel their employer has been transparent about its use of AI, according to a July Asana survey of more than 4,500 knowledge workers in the U.S. and United Kingdom.
Even amid the rush to adopt, leaders have to commit to letting ethical frameworks and generative AI policies guide their plans. As a baseline, technology leaders should craft policies and frameworks that cover both the intentions and consequences of any given use case, according to Frank Buytendijk, distinguished VP analyst at Gartner.
“Ethics, most importantly, is contextual, and that means a policy is never complete,” Buytendijk said. “Don’t treat this as a policy, but as a process.”
Define, try, assess
To create ethical guidelines for generative AI use, leaders should start by clearly defining principles and values, such as how the organization believes the technology should work and what it should do, Buytendijk said.
This is the step that Justin Skelton, SVP and CIO at Dine Brands Global, is currently executing at his organization.
“Before we lean into a new technology, we want to have a better understanding of how to use it, [and] what the various use cases would be for the company,” Skelton said. “A lot of it is around the compliance, controls and things that we want to have in place.”
Dine Brands Global, which operates Applebee’s, IHOP and Fuzzy’s Taco Shop, is thinking critically about how data would be stored and retained before opening the floodgates for generative AI use.
After establishing the guiding principles, the second step is to operationalize them through use case reviews, according to Buytendijk.
Once organizations set that foundation, technology teams need to monitor for unintended consequences, such as changes in the model’s behavior.
Even over a short period of time, large language model behavior can change substantially. Researchers from Stanford and the University of California, Berkeley found that OpenAI’s GPT-3.5 and GPT-4 models became significantly less accurate in some areas.
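Teams that track this kind of drift often re-run a fixed evaluation set against the model on a schedule and compare the results to a recorded baseline. The sketch below illustrates that approach; the `query_model` stub, the evaluation cases, the baseline and the alert threshold are all hypothetical placeholders, not any vendor's actual API.

```python
# Minimal sketch of drift monitoring for a deployed LLM: re-run a
# fixed evaluation set and compare accuracy to a stored baseline.
# query_model, EVAL_SET and the thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class EvalCase:
    prompt: str
    expected: str  # normalized expected answer


# A tiny fixed evaluation set; a real suite would hold hundreds of cases.
EVAL_SET = [
    EvalCase("Is 17077 a prime number? Answer yes or no.", "yes"),
    EvalCase("What is 12 * 12? Answer with digits only.", "144"),
]

BASELINE_ACCURACY = 0.95  # recorded when the model was first approved


def query_model(prompt: str) -> str:
    """Placeholder for a real model call (vendor SDK or internal gateway).
    Canned answers keep the sketch self-contained and runnable."""
    return "yes" if "prime" in prompt else "144"


def run_eval() -> float:
    """Score the model against the fixed evaluation set."""
    correct = sum(
        query_model(case.prompt).strip().lower() == case.expected
        for case in EVAL_SET
    )
    return correct / len(EVAL_SET)


def check_drift(max_drop: float = 0.05) -> None:
    """Alert if accuracy has fallen more than max_drop below baseline."""
    accuracy = run_eval()
    if BASELINE_ACCURACY - accuracy > max_drop:
        # In production this might page an on-call team or open a ticket.
        print(f"ALERT: accuracy {accuracy:.0%} vs. baseline {BASELINE_ACCURACY:.0%}")
    else:
        print(f"OK: accuracy {accuracy:.0%} within tolerance")


check_drift()
```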
How teams respond to the deteriorating performance of an AI model can depend on the use case and the severity of the change.
If a retailer observed a slight decline in a model’s behavior in a use case where generative AI offered customers suggestions on what to buy, the company wouldn’t necessarily need to take the model down, said Adnan Masood, chief AI architect at UST.
“That’s where the precision doesn’t have to be as high,” Masood said. “Whatever the tolerance the business has for that requirement varies. In healthcare, you cannot have that much of an error measurement.”
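One way to make that tolerance explicit is to encode a minimum acceptable accuracy per use case, so the same drop in quality triggers different responses. The use case names and thresholds in this sketch are made-up values for illustration, not drawn from any company cited here.

```python
# Illustrative per-use-case tolerance policy. Use case names and
# accuracy floors are hypothetical values for the sake of the example.
TOLERANCE = {
    "retail_recommendations": 0.80,  # suggestions can afford some error
    "clinical_summaries": 0.99,      # healthcare leaves little room for it
}


def decide_action(use_case: str, accuracy: float) -> str:
    """Map a measured accuracy to a response, given the use case's floor."""
    floor = TOLERANCE[use_case]
    if accuracy >= floor:
        return "keep serving"
    if accuracy >= floor - 0.05:
        # Near-miss: keep the model live but flag it for human review.
        return "flag for review"
    # Severe miss: pull the model until it is retested.
    return "take model down"


print(decide_action("retail_recommendations", 0.78))  # flag for review
print(decide_action("clinical_summaries", 0.78))      # take model down
```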
Some of the largest companies, from retailers to consulting firms and commercial real estate companies, are entering new phases of generative AI adoption. Many others are dipping their toes in the water behind the scenes.
Since ChatGPT’s public launch, more than 80% of Fortune 500 companies have registered ChatGPT accounts, according to OpenAI.
With the amount of money and attention directed toward generative AI, leaders have to ground experiments and adoption plans in ethical guidelines and guardrails. To avoid creating an echo chamber, CIOs can call on other trusted leaders or create an advisory board to oversee experimentation and implementation plans, Buytendijk said.
Enterprises that fail to address risks still have the option, for now, to stay the course. But as regulatory pressure builds, industry watchers expect some form of mandatory requirements ensuring the ethical, responsible use of generative AI.
Gartner predicts that by 2025, regulations will force enterprises to focus on AI ethics, transparency and privacy, according to research published in July. Buytendijk expects regulation to cut both ways: responsible use of AI will no longer be optional, but regulatory compliance represents a fairly low level of ethical thinking.
“It’s good if it becomes a mandatory discussion, but it’s challenging because it becomes a more technical thing where people want to get it over with, instead of being intrinsically motivated to do the right thing,” Buytendijk said.