General counsel are playing a two-tiered role as companies explore how to deploy generative AI in their business and support functions, Kevin Clem, chief growth officer at HBR Consulting, says.
Since generative AI exploded on the scene last year as an add-on to existing AI-assisted technology functions, CEOs have been asking their C-suite executives to explore use cases for the technology.
As part of those marching orders, general counsel and chief legal officers are looking for ways to deploy the technology in their legal-team processes.
But they’re also taking a higher-level view, assessing from a risk-management perspective which types of uses, by any business or support function in the organization, are likely to be acceptable and which aren’t.
Additionally, legal chiefs are examining what policies and guardrails are needed for the use cases that eventually get deployed.
“There’s an expectation that the GC or the legal team will weigh in on acceptable use and set appropriate guardrails,” Clem told Legal Dive. “They may not get down to the level of actually reviewing use-case by use-case examples, because there are going to be hundreds if not thousands of use cases in an organization. So, the expectation isn’t that every potential use case is explored by the legal team on a substantive level, but that the team will help define the parameters around which generative AI can and should be used from a policy standpoint.”
Support functions
An example of a non-legal use case that companies are looking at, Clem said, is in the HR function.
Rather than have someone on the HR team answer common employee questions, that function would be handed over to a generative AI tool that’s been trained on and draws from the company’s employee policy manual.
One company working with HBR is considering training its generative AI tool on its 70-page employee policy manual. If it follows through, he said, the tool would keep its answers within the parameters of that manual.
“You’re not looking at the totality of the data that’s out in the large language model,” Clem said. “It’s pointing to a source that’s relatively contained.”
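In practical terms, keeping a tool pointed at “a source that’s relatively contained” usually means retrieving the relevant section of the manual and instructing the model to answer only from that excerpt. The sketch below illustrates that retrieval-and-grounding pattern; the sample manual sections, the keyword-overlap scoring, and the answer_from_manual helper are illustrative assumptions, not a description of any vendor’s product, and the actual model call is left as a placeholder.

```python
# Illustrative sketch: ground answers in a contained source (an employee policy
# manual) rather than the model's full training data. The manual text, scoring
# method, and the placeholder where a model call would go are assumptions.

MANUAL_SECTIONS = {
    "Paid time off": "Employees accrue 1.5 days of PTO per month...",
    "Remote work": "Employees may work remotely up to three days per week...",
    "Expense reports": "Expenses must be submitted within 30 days...",
}

def retrieve_section(question: str) -> tuple[str, str]:
    """Pick the manual section sharing the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(item):
        title, body = item
        return len(q_words & set((title + " " + body).lower().split()))
    return max(MANUAL_SECTIONS.items(), key=overlap)

def build_grounded_prompt(question: str) -> str:
    """Constrain the model to the retrieved excerpt, per the 'contained source' idea."""
    title, body = retrieve_section(question)
    return (
        "Answer using ONLY the policy excerpt below. "
        "If the excerpt does not cover the question, say so.\n\n"
        f"Policy section: {title}\n{body}\n\nQuestion: {question}"
    )

def answer_from_manual(question: str) -> str:
    prompt = build_grounded_prompt(question)
    # A call to the generative AI tool would go here; the specific model or
    # API is out of scope, so the prompt is returned so the sketch runs as-is.
    return prompt

if __name__ == "__main__":
    print(answer_from_manual("How many remote days are allowed per week?"))
```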
Another example is how a company’s communications team might use the tool. Rather than have a public relations person write a first draft of a press release, the tool would be deployed for that purpose and the PR person would come in later to refine it.
Risk watch
These kinds of use cases are what Clem considers relatively safe experiments with the new technology, which trains on and draws from natural-language databases to create content written the way a person would write it in a document. By creating these initial drafts, it can free up staff for more value-added work.
Risk is high on the minds of in-house counsel and company leadership after a well-publicized mishap in early June by an attorney who used a generative AI tool to conduct research for a court filing that cited half a dozen made-up cases.
Analysts say the technology will get better as it's further developed. But for now, the best practice is to find relatively safe use cases to experiment with, which means looking for applications in which the tool trains on, and draws from, trusted and self-contained data sources.
“You want to make sure it’s answering correctly,” Clem said. “So, run some experiments within the walls you’re going to control before opening up more broadly.”
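One way to run experiments “within the walls you’re going to control” is a small evaluation pass: ask the tool questions whose correct answers are already known from the manual and flag mismatches before opening access more broadly. The sketch below is a minimal, self-contained illustration; the test questions, expected phrases, and get_answer stub are assumptions standing in for the real tool.

```python
# Illustrative sketch: check the tool's answers against known ground truth
# before a broader rollout. Questions, expected phrases, and the get_answer
# stub are assumptions; a real pilot would call the actual tool here.

TEST_CASES = [
    ("How much PTO do employees accrue per month?", "1.5 days"),
    ("How many remote days are allowed per week?", "three days"),
]

def get_answer(question: str) -> str:
    # Placeholder for the generative AI tool under evaluation.
    canned = {
        "How much PTO do employees accrue per month?":
            "Employees accrue 1.5 days of PTO per month.",
        "How many remote days are allowed per week?":
            "Up to three days per week.",
    }
    return canned.get(question, "The policy manual does not cover this.")

def run_contained_experiment() -> None:
    failures = 0
    for question, expected_phrase in TEST_CASES:
        answer = get_answer(question)
        ok = expected_phrase.lower() in answer.lower()
        failures += not ok
        print(f"{'PASS' if ok else 'FAIL'}: {question} -> {answer}")
    print(f"{failures} of {len(TEST_CASES)} checks failed")

if __name__ == "__main__":
    run_contained_experiment()
```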
Legal function
For the legal department, it can make sense to experiment with the technology first on non-legal tasks, such as training attorneys on a new tool or workflow.
If you bring in a new document management or e-discovery system, for example, you can use generative AI to write the first draft of the training manual. That frees up staff time while keeping the experiment within a relatively safe space.
“Someone isn’t sitting down and writing the material from whole cloth or trying to adapt from a prior training document,” Clem said. “So, that’s an example where you’re rolling out new technology, leveraging generative AI to help with a new training curriculum. It’s not going to be perfect, but if you can get a good head start, why not leverage generative AI to help draft the script?”
You can even mix and match AI-assisted tools to free up more staff time, he said.
One project HBR is working on combines a generative AI-drafted training script with an AI-assisted avatar that will conduct the actual training. If it’s deployed as planned, staff involvement will be minimal, because the script will be largely written automatically and the presentation, once refined, will be automated. What’s more, for a global operation, the training can be conducted in whatever language is needed.
“It’s stitching together generative AI with more traditional avatar-based AI training technology that can read a script in whatever language you want it deployed and you’re now deploying a training around a new technology that is entirely based on AI,” he said.
That kind of training can apply to any function within an organization, including legal, which could use it for training on new tools or procedures.
On the legal side, it’s best to stick with trusted, long-validated sources, such as Westlaw and LexisNexis, for legal research and writing. Both Thomson Reuters and LexisNexis are developing tools that incorporate the new technology, such as Lexis+ AI.
“You’re drawing from reputable sources,” Clem said. “These have been used for legal research for decades. If it’s just drawing on the data you know is contained, you’re at less of a risk that the technology is going to create hallucinations or invent content.”
Bottom line: as CEOs ask their executive teams to look at possible generative AI use cases, in-house legal teams can expect to be called on to formulate company-wide, risk-management-based policies on the use of the new technology, while at the same time looking for possible use cases within their own operations.