General counsel are the appropriate C-suite executives to help ensure the board knows how the company is using artificial intelligence and is exercising oversight of it, Troutman Pepper Counsel Shannon VanVleet Patterson said in a podcast.
AI use tends to come from the bottom up, with tech or other staff adopting it as part of their function. Staff experimentation can help the company learn how to use the technology effectively and what business opportunities might flow from it, but employee use also creates risk that needs to be managed within the company’s broader strategy, and that’s a function of the board, she said.
“You have to have your arms around what's already there in the company,” said Patterson, a corporate governance specialist.
Patterson suggested boards, with their fiduciary duties of care and loyalty, approach AI the same way they oversee data risks and environmental, social and governance issues – that is, as part of the company’s strategic plan.
“These [are] business elements that can impact the long-term growth and value of a company,” she said.
For GCs, managing AI risk means putting in place a process to ensure the board stays on top of how the technology is being used and ensuring the board has the right mix of expertise to exercise its oversight.
Governance structure
GCs can build oversight around an existing committee, probably one that’s already involved in tech issues, by making AI issues part of its regular agenda.
That doesn't mean AI oversight should be limited to board members with tech backgrounds, she said; it’s the judgment that’s crucial.
For non-tech directors, an appropriate role for the GC is to work with IT to create a way to talk about AI in non-technical terms so specialized jargon doesn’t get in the way of strategy.
“You want your directors to be comfortable in discussing these terms and understanding what they're hearing when they hear from technical specialists,” she said.
Transparency
Once you have a process for the board to consider AI issues and the right mix of expertise, you want to document what you know about the risks of AI and the thinking behind the board’s decisions on the guardrails it’s implementing.
Transparency in the process is crucial, because there’s little guidance on these issues from the federal government, and none is likely to come soon. So it’s up to companies to create their own guidance and to be prepared to show why they’re governing AI the way they are.
“Companies are really operating without a uniform legal or regulatory framework,” she said.
“And I think that will persist for the foreseeable future,” Chris Willis, a Troutman Pepper partner, said on the podcast.
Once your guardrails are in place, you want a process that makes employees comfortable coming forward if they see AI being used in ways that fall outside those guardrails.
If employees don’t have that sense of comfort, they’re more likely to go to a regulator first, putting the company on the defensive.
“Regulators are out there begging for whistleblowers to come forward,” said Willis. It's better to “encourage that conversation to occur internally within the company.”
An engaged board that is watching the company’s AI use closely, as part of its fiduciary responsibility, is the foundation for employees to feel comfortable coming forward, the attorneys said.