U.S. companies will be affected by the European Union’s law regulating artificial intelligence, but those already following good AI governance practices should be able to manage the restrictions without much trouble, privacy and technology legal specialists say.
The EU Artificial Intelligence Act, approved by the European Parliament in an overwhelming 523-46 vote on March 13, is the world’s first law to classify AI uses by the degree of risk involved. Uses posing the highest risk are banned altogether, while lower-risk uses are allowed with limits.
Among the uses facing an outright ban are biometric categorization systems that infer sensitive characteristics such as race, religion, sexual orientation and political beliefs; social scoring used to screen individuals out of jobs; systems that attempt to recognize employees’ and students’ emotions; and tools that manipulate people’s behavior or exploit their vulnerabilities.
Tools that aren’t banned but are classified as high risk include those that play a role in selecting people for jobs or that are used in workplace monitoring.
Most current uses of AI are considered to be minimally risky and would fall under existing privacy and security rules. “The vast majority of AI systems currently used or likely to be used in the EU fall into this category,” an EU fact sheet says.
The law also calls for particular scrutiny of AI systems used in critical infrastructure (such as transportation), financial services, and health care, as well as educational or vocational training. Companies using the technology in these areas must take steps to assess and reduce risks; maintain use logs; meet transparency and accuracy standards; and ensure human oversight. Companies also must give individuals a way to submit complaints about such AI systems and receive explanations about decisions based on high-risk AI uses that affect their rights.
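In engineering terms, those obligations map onto familiar logging and review patterns. The sketch below is a hypothetical illustration, not anything prescribed by the Act or its forthcoming technical standards; the function names, the JSONL log file, and the confidence threshold for routing decisions to a human reviewer are all assumptions.

```python
# Hypothetical sketch of an audit trail for a high-risk AI system.
# Names, the log file, and the threshold are illustrative, not taken
# from the EU AI Act.
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Callable

@dataclass
class DecisionRecord:
    record_id: str
    timestamp: str
    model_version: str
    inputs: dict
    output: str
    confidence: float
    needs_human_review: bool

def logged_decision(model: Callable[[dict], tuple[str, float]],
                    model_version: str,
                    inputs: dict,
                    review_threshold: float = 0.8) -> DecisionRecord:
    """Run the model and persist a reviewable record of the decision."""
    output, confidence = model(inputs)
    record = DecisionRecord(
        record_id=str(uuid.uuid4()),
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        inputs=inputs,
        output=output,
        confidence=confidence,
        # Low-confidence decisions are routed to a human reviewer.
        needs_human_review=confidence < review_threshold,
    )
    # Append-only log, so records can later back up explanations
    # given to individuals who complain about a decision.
    with open("decision_log.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```

A record like this gives a deployer something concrete to point to when an individual exercises the right to an explanation of a high-risk AI decision.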
Penalties for violations are steep: up to 7% of a company’s global revenue or 35 million euros (about $38 million), whichever is higher.
U.S. impact
While the United States has yet to adopt anything like the EU AI Act, U.S.-based companies that ignore the measure do so at their peril.
“One is hard pressed to find a publicly traded U.S. company that is not impacted by Europe,” said Los Angeles cybersecurity attorney Dominique Shelton Leipzig, who leads the global data innovation team at Mayer Brown.
The law reaches any AI-enabled enterprise doing business in the EU or directing services in any way toward people in the EU. As a result, it affects businesses far beyond the EU’s 27 member countries.
The extraterritorial reach is similar to that of the EU’s General Data Protection Regulation, which took effect in 2018 as the world’s strongest data privacy and security law.
“The analogy to privacy and the GDPR is very strong,” said Seyfarth Shaw attorney Rachel See, who previously served as an EEOC Commissioner’s senior counsel for AI and algorithmic bias. “The way we think about privacy shifted the conversation. The EU AI Act is poised to do the same thing with the way we think about risk.”
Compliance risk
Companies that are following good AI governance practices need not panic, says Ashley Casovan, managing director of the International Association of Privacy Professionals AI Governance Center.
“I don’t think that using AI where and when appropriate should change based on legislation in the EU or anywhere else,” Casovan told Legal Dive.
“You have to rethink what the benefit of the technology use is versus the risk,” See said.
Shelton Leipzig voiced a similar sentiment, likening the EU AI Act to a traffic light at an intersection: prohibited uses of AI must stop, but plenty of advantageous uses still get the green light.
Hiring is a prime example of a high-risk activity, Shelton Leipzig said. That makes it especially important for companies to vet both the data going into their algorithms and the results coming out.
“You need continuous testing and monitoring where there is the most potential harm to ensure that things like bias are not happening,” she said. “That means new code may need to be dropped into an algorithm if the results are antithetical to the company’s fairness goals.”
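One concrete way to operationalize that kind of monitoring, sketched here as a hypothetical example rather than anything the Act or Shelton Leipzig prescribes, is the “four-fifths rule” long used in U.S. employment-discrimination analysis: compare selection rates across demographic groups and flag any group whose rate falls below 80% of the highest group’s.

```python
# Hypothetical bias-monitoring check using the four-fifths rule.
# Group labels and the 0.8 threshold are illustrative assumptions.
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group, was_selected) pairs from the hiring model."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_alerts(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    times the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < threshold]

# Example: group B's rate (0.25) is half of group A's (0.5) -> flagged.
sample = [("A", True), ("A", True), ("A", False), ("A", False),
          ("B", True), ("B", False), ("B", False), ("B", False)]
print(disparate_impact_alerts(sample))  # ['B']
```

Run continuously against production decisions, a check like this is one way to surface the kind of drift away from fairness goals that would prompt a code change.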
Given the intersection of technology and fairness, compliance becomes a team sport; it can’t fall solely on the IT team. “You need a cross-disciplinary team,” she said. “Reach out to subject-matter experts. Keep a record of everything, and don’t allow your tests to fall into the ether.”
Shelton Leipzig explained that models can and do shift, so it is crucial for humans to be involved to bring the systems back on track.
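Drift monitoring of that sort is commonly automated. The sketch below uses the population stability index (PSI), a standard industry heuristic for comparing a model’s current score distribution against a baseline; the 0.2 alert threshold is a common rule of thumb, not a legal requirement, and the function names are illustrative.

```python
# Hypothetical drift monitor using the population stability index (PSI).
# The 0.2 alert threshold is a common rule of thumb, not anything
# prescribed by the EU AI Act.
import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    """Compare two score distributions; higher PSI means more drift."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0

    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            idx = min(int((s - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids division by zero in empty bins.
        return [max(c / len(scores), 1e-4) for c in counts]

    p_base, p_cur = proportions(baseline), proportions(current)
    return sum((c - b) * math.log(c / b) for b, c in zip(p_base, p_cur))

def drift_alert(baseline, current, threshold=0.2) -> bool:
    """True when drift is large enough to warrant human review."""
    return psi(baseline, current) >= threshold
```

When the alert fires, the point of the Act’s human-oversight requirement is that a person, not the pipeline, decides how to bring the system back on track.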
Next steps
Most of the EU AI Act will not take effect until 2026, though the prohibitions on unacceptable-risk AI systems could apply as early as November 2024. Corporate leaders should prepare now by familiarizing themselves with the prohibited and high-risk categories.
Questions remain over how the EU AI Act will be enforced. “AI regulators have a lot on their plate,” said See.
That makes it essential for compliance teams to stay on top of the details as regulators flesh out concepts that are unfamiliar at this early stage but will need to be built into companies’ controls.
“It’s important to harmonize on the concepts to make it easier for people developing these systems to comply,” Casovan said.