Dive Brief:
- As companies incorporate generative AI into their work, they’ll need to consider a range of compliance implications as laws and regulations evolve, executives said at The Wall Street Journal Chief Compliance Officer Summit in early May.
- Generative AI creates new risk management challenges, especially as content-generation capabilities reach a much wider pool of users and foundation models come under scrutiny. “It all boils down to, ‘Have I thought about the impacts and am I mitigating the risks, and ultimately, once I've looked at that, is it appropriate at all?’” said Matt Hervey, head of the artificial intelligence law practice in the London office of Gowling WLG. Technical standards are still in the early stages of development, he said.
- As jurisdictions develop laws and regulations around AI use, companies could look to existing frameworks — such as the National Institute of Standards and Technology AI risk management framework — to guide compliance approaches, said Simon McDougall, chief compliance officer at ZoomInfo.
Dive Insight:
Companies are aggressively investing in AI to improve operations, despite the risks and regulatory uncertainty around AI use, panelists said.
“I have life sciences clients… who are recruiting teams of literally hundreds of engineers to change the way they do research and development, and they are doing so in the knowledge that medical device regulation has not been written yet, that there are unanswered questions about how the court will approach key issues like IP, but they are still investing,” said Hervey.
Jurisdictions are at different stages of adoption when it comes to laws and regulations around AI use: China has extensive regulation in place already; the EU AI Act has been formally adopted; and other jurisdictions, such as the U.K., are relying on individual regulators to take action, he said. At the same time, existing laws already cover cases where harm occurs because of AI use.
“If harm occurs, and it’s against the law, it already applies,” said Hervey. “If I'm unfairly dismissed, the judge is not going to care about whether AI was used or not.”
Taking cues from existing guidelines
As the legal and regulatory landscape evolves, companies might consider taking cues from existing frameworks, laws and guidelines, including the NIST AI Risk Management Framework, said McDougall.
NIST guidelines are “the best game in town in terms of actually being something which is quite practical,” he said. While not a silver bullet, they're a good place to start as companies seek guidance on building policies and procedures, he said. In addition, the EU AI Act’s list of high-risk activities offers insight for organizations designing oversight frameworks for AI, he noted.
Challenges with foundation models
Generative AI systems are built on foundation models with a vast range of potential use cases, but they also introduce risks amid demands for greater transparency, explainability and bias mitigation. Intellectual property challenges could also emerge, panelists said.
“Foundation models could be used for high-risk or could be used for low-risk [activities] … you need your providers to help support you so that you know how they've trained it and what risks you're taking,” said Hervey. “If I'm trained on copyright works, I'm trained on private information – there's no way to take it back out of the model without retraining, and so there are some serious compliance risks you want to get right at the beginning of a project.”
Effective oversight of these systems doesn’t just mean involving the right teams, but also ensuring sufficient diversity among them, he noted.
Generative AI also introduces data privacy and security considerations that might prompt calls to update existing laws and regulations, including the possibility of a “GDPR 2,” or a second round of the EU General Data Protection Regulation, said McDougall.
“In the old days, it was simple — personal data related to an individual,” he said. “Now, where you can use generative AI to produce a whole CV about somebody… is that personal data, is it not? So the old definitions in GDPR are starting to break down.”