It’s a good bet the Department of Justice under President Trump won’t police artificial intelligence as aggressively as the Biden administration did, but in-house counsel must still ensure they have controls around their company’s AI use because other governments are keeping AI in their sights.
“There's the EU AI Act, the AI Transparency Act in California and a number of other federal and state proposals bubbling up” that try to hold companies accountable for how they’re using AI, says Eric Vandevelde, co-chair of Gibson Dunn’s AI practice group and a former U.S. attorney who oversaw the cyber and IP crimes section in the Central District of California. “Companies will have to bake all of that into their enterprise risk management strategies.”
In one of the Biden administration’s last moves at DOJ, the agency in September released updates to its Evaluation of Corporate Compliance Programs policy — the guidelines prosecutors in the Criminal Division use when evaluating companies’ compliance programs as part of a corporate enforcement action.
The ECCP update covers several topic areas, but among the most game-changing for in-house teams are the revisions to how prosecutors evaluate a company’s management of the risks posed by its AI technology.
Among other things, DOJ will be looking at how the company assesses the impact of its AI on its ability to comply with criminal laws and whether the management of risks related to its use of AI is integrated into broader enterprise risk management strategies.
The agency also wants to see details of the company’s approach to governance over its use of AI in its commercial business and compliance program, how much human oversight it maintains over its AI, and how it is curbing potential negative or unintended consequences resulting from its use of AI.
These changes recognize that the use of AI has become ubiquitous in companies’ core business applications and products and that the technology poses unique risks, said former Deputy Attorney General Lisa Monaco.
“When our prosecutors assess a company’s compliance program — as they do in all corporate resolutions — they consider how well the program mitigates the company’s most significant risks,” Monaco said last year at the American Bar Association’s National Institute on White Collar Crime. “For a growing number of businesses, that now includes the risk of misusing AI.”
Legal-engineering collaboration
For companies, the implications are far-reaching.
First, in-house counsel will need to work much more closely with IT so they can devise controls that help ensure the company’s AI use passes muster with DOJ’s stepped-up scrutiny, Vandevelde told Legal Dive. “What they need to be doing is getting smart on AI — understanding the basics of the technology and how their company is deploying it and using it, and understanding how the laws may apply to it.”
Vandevelde recommends in-house counsel collaborate with their company’s technical teams so they can see where the company could be exposed to problems.
“They’ll need to work closely with engineers and developers to pressure test AI systems to understand any adverse or unintended consequences that may be flowing from them,” he said.
It’s not clear whether former Florida AG Pam Bondi, Trump’s pick for attorney general, will walk back the emphasis placed on AI controls. But DOJ isn’t the only game in town, Vandevelde said; the trajectory of AI governance resembles the evolution of cybersecurity and privacy law, so legal teams will need to keep a close eye on emerging rules and laws from a variety of sources.
“It’s a very complex undertaking nowadays,” he said, “particularly for multinational companies who operate in multiple jurisdictions.”
Lawmakers in Europe passed the first comprehensive law governing companies’ use of AI, the EU AI Act, in May of last year. The law prohibits some AI uses outright, such as those incorporating people’s biometric information, including face scans and fingerprints, and allows other, less invasive uses subject to controls. And the California AI Transparency Act, which takes effect in 2026, imposes disclosure requirements on companies’ use of AI.
Given these and other AI laws that are already enacted or in the works, companies will need to have controls in place regardless of what DOJ does, Vandevelde said.
“We're already seeing a significant increase in litigation and investigation activity by federal and state regulators into AI systems, particularly when it comes to consumer protection, discrimination issues and whether intellectual property rights are being violated,” he said. “Antitrust is also a major concern, including whether algorithmic pricing enables price fixing and collusion.”