Don’t wait until you hear from employees that they’re using generative AI in their work; be proactive and create a policy governing use of the new technology, attorneys say.
Lawyers at one company initiated a process to write a policy after hearing employees were already using it, Kim Phan of law firm Troutman Pepper said in a webcast hosted by the firm.
“The legal folks learned just through water cooler chat that folks were starting to use AI,” she said. “They were fortunate enough to have a company that has the resources to devote to thinking about this.”
Companies are playing catch-up because employees are ahead of them on use of the technology, said Peter Wakiyama, also of Troutman Pepper. “This is probably most companies right now,” he said.
Generative AI differs from more familiar versions of the technology by using information provided in prompts to create, rather than analyze, content. Lawyers have started using it in some cases to write the first draft of a brief or non-legal communication, among other things.
Starting the process
There are six steps in-house teams should consider when writing a policy governing generative AI use for their organization, Phan said.
The first is to let everyone at the organization know you’re planning the policy, so the process won’t come as a surprise once it gets underway.
“The first communication needs to be about transparency, leading your employees toward the direction as a top-down approach your company wants to take,” she said.
You’ll want to assemble a team from all of the parts of the organization that will be touched by the technology. It can make sense for legal and HR to take the lead, but also important are IT, R&D, communications, marketing and sales.
The team should come to agreement on the organization’s risk tolerance because that will help determine the depth and breadth of the policy. Technology companies that have been using AI for a while might be expected to be more risk tolerant than others, but nailing this down for your organization is crucial.
“Companies have different appetites for risk, but this is a starting point,” said Wakiyama.
Surveying stakeholders
In the next step, develop a survey to send to employees to learn whether and how they’re already using generative AI, and in what scenarios they would and would not want to use it.
“The survey may even differ from department to department,” said Wakiyama. “So, put out your survey – you can use an electronic survey tool for that – and gather the results so you’re making the decision based on actionable information.”
As responses come in, the team can devise scenarios for how the technology should and shouldn’t be used.
“Are those the right use cases for our company?” said Phan. “The critical stakeholders – legal, HR, IT, others – get together and think about that.”
Based on her work with clients, Phan said, she’s seen a range of use cases, especially ones that apply to coders in engineering. “They’re treating generative AI the same way they might incorporate open source coding data into their systems,” she said.
Coders have been quick to adopt AI because it can help them work faster, said Gerard Lewis, former deputy general counsel at Comcast.
“They’ve used it for rapid prototyping and deployment of new technologies,” said Lewis in the webcast. “It lets developers create quickly and get something stood up in record time to see if an idea or logic flow works.”
Outside of engineering, other use cases that are being tried include first drafts of documents. “They’re using it to produce an initial draft of internal or customer service communications, with some human intervention to tailor it,” Phan said. “And the marketing team is using it for jingles, slogans and other content.”
Policy outline
Once the team has developed, vetted and received approval for a list of use cases, it should create a policy outline that it sends around to get buy-in from leadership and stakeholders.
“The outline is critical,” said Wakiyama. “It’s the foundation for the policy. Share it with all the stakeholders and those who need to put the policy out there in the field and answer questions about it.”
When the outline has been given wide exposure, the team drafts the policy, which should include a list of acceptable and prohibited uses. It should also highlight the process employees follow to get exceptions approved.
“Maybe you’re OK with employees exploring, figuring out what makes sense for them,” said Phan. “Maybe you want to have more level of control or you want employees to vet new use cases with you.”
You don’t want the exception process to be too rigid; otherwise, you risk employees ignoring it and experimenting with generative AI outside the policy.
“Employees [could] try to circumvent the process and then you have non-compliance,” she said. So you want the process to be smooth — “a documented way for employees to acknowledge the prohibition but still take a clear path to get an exception.”
The policy should also include sections on recordkeeping and use logs, security, privacy, IP rights and new tools.
Iterative process
Setting up a recordkeeping process and administering a use log can be burdensome, but both are needed to measure how the organization is using generative AI and whether it’s having an effect on operations.
“Unless this information is collected, it’s difficult for the organization to know if the use of AI is beneficial,” said Wakiyama. “And if someone asks about it — a commercial insurer, for example — if they have that on a questionnaire for commercial insurance, the organization won’t have the information if it’s not being kept on an ongoing basis.”
Once the policy is out, you want to disseminate it the way you would any other policy. That could mean an email blast to all employees or something else.
Training is also important, both to help ensure employee compliance and to get buy-in.
“We know employees often don’t read policies in full, so training can enhance adoption and compliance,” said Wakiyama.
The last piece is keeping the policy updated. That could mean sending out the survey on a regular basis to learn how employees are using, or want to use, the new tools. Updating could also include adding new acceptable or prohibited uses, incorporating regulatory compliance as needed and making sure the policy doesn’t conflict with your organization’s other policies.
“It’s an evolving area,” said Wakiyama. “It’s new in terms of case law, regulations. It’s changing daily. We’re seeing new cases, early decisions. The policy [should] evolve along with the AI landscape, and as the organization obtains feedback on how and where it’s being used, you can update accordingly.”