Leading legal chiefs say they recognize AI's potential to rapidly transform their industries, so they support employees testing the latest technology for work purposes.
But to ensure that exploration of emerging artificial intelligence happens in a way that minimizes risk, in-house legal departments are playing a key role in developing AI acceptable use policies.
These corporate AI policies should start by emphasizing basic dos and don’ts, three legal chiefs said during an AI-focused webinar hosted by contract lifecycle management provider Ironclad.
Dana Wagner, chief legal officer at customer engagement platform Twilio, said it’s important to highlight which AI-powered platforms the company has relationships with and which are safe for employees to use.
It also helps to make clear the difference between putting publicly available data into a public AI model that will use it for training and inserting proprietary data into proprietary models that do not train on it.
“You want the experimentation; you just want it to happen in a responsible way,” Wagner said.
Alan Tse, global chief legal officer at commercial real estate company JLL, agreed it is essential to emphasize that employees must be careful about inputting confidential information into AI tools.
He also said it is important to remind employees that, just as when they use Google to find content such as a photo, they need to make sure they have legal authorization to use images or other information produced by AI-powered technology.
This guidance is necessary because generative AI tools such as ChatGPT can quickly produce text, images and video in response to user prompts.
Along those lines, AI policies should require that outputs from AI-powered platforms be reviewed by humans for accuracy, said Eleanor Lacey, general counsel at work management platform Asana.
This approach can help prevent disasters that could harm a business, such as a lawyer citing fake cases produced by a generative AI tool.
Collaboration
During the webinar moderated by Ironclad Chief Community Officer Mary O’Carroll, the legal chiefs also shared that collaboration with other business units is vital when crafting corporate AI policies.
Lacey said Asana’s legal team has worked with the company’s security and engineering units on AI-related policies.
For example, she said bouncing AI questions off her engineering colleagues has helped her learn more about the emerging technology and has informed policy drafting.
“I go to the engineers and I say, ‘This is the thing that I want to understand more,’” Lacey said. “And they’re like, ‘No, you think you want to understand that, it’s actually this other thing that you don’t even know exists. And now we’re going to educate you on it.’
“I think having that back and forth is incredibly helpful as a lawyer because I’m not also a coder,” Lacey continued. “It’s helping me learn and better identify where I think the risks are.”
Wagner said legal departments also benefit from asking executives and leaders of other business units how the company is planning to use AI.
This level of specificity can assist with crafting AI policies and principles that are more concrete and useful, he said.
Policy iteration
The legal chiefs said companies should be prepared to update their policies as AI technology develops further and internal use cases change.
“We’re not trying to get it perfect,” Tse said. “We know that we’re going to have to move a little bit as we learn.”
The legal leaders said this approach is a much better option than instituting outright bans on AI or generative AI.
Tse said employees would still experiment with AI regardless, but they would do so without guardrails in place.
“They are still going to use it, they just won’t tell us how they are using it and we can’t help,” he said.
In that vein, a recent Asana survey of knowledge workers found that 48% want more guidance from their employers on how to use AI.