Dive Brief:
- Colorado’s first-in-the-nation AI consumer protection law creates a complex compliance regime for companies that develop or use what’s being called high-risk AI systems, but it also applies a softer prosecution touch to those that make an effort to keep their systems from unintentionally harming people.
- “I don’t know what kinds of businesses you should have if you aren’t willing to be accountable for [these] systems,” Robert Rodriguez, chief sponsor of the law in the state senate, said when the bill was being debated earlier this month.
- The law is intended to press companies to develop and deploy systems that make automated decisions – in applications for jobs, mortgages, credit, insurance and other “high risk” matters – without unintentionally discriminating against people. It gives companies two years to put a compliance process in place before it takes effect in early 2026.
Dive Insight:
In signing the law May 17, Gov. Jared Polis said he thinks it does a pretty good job of holding companies to the right level of accountability, but there remains a risk that Colorado could hurt its standing as a technology hub if compliance proves too heavy-handed for what lawmakers are trying to accomplish.
“While the guardrails, long timeline for implementation and limitations contained in the final version are adequate for me to sign this legislation,” he said in a statement, “I am concerned about the impact this law may have on an industry that’s fueling critical technological advances across our state.”
It would be better if Congress stepped in, he said, because a nationwide approach is the best way to protect people against AI risks. Absent that, he asked Colorado lawmakers to revisit the compliance requirements as they learn more.
“Should the federal government not preempt this with a needed cohesive federal approach, I encourage the general assembly to work closely with stakeholders to craft future legislation for my signature that will amend this bill to conform with evidence-based findings and recommendations for the regulation of this industry,” he said.
The federal government isn’t likely to step in any time soon. The Senate has issued an AI roadmap and the White House has released a set of guidelines developed by the Department of Labor, but neither of these policy documents goes into the kind of detail that can form the basis of legislation in the near term.
Risk assessment
The law imposes compliance requirements on companies that develop AI as well as on those that use it.
Developers are to conduct impact assessments and share them with companies that use their technology. The assessments are meant to cover the data that goes into the algorithm, the decisions that come out of the application, the metrics used to assess performance and a description of how the company is being transparent about all of this. If the developer knows of any risks in its tools, those must be disclosed, along with what’s being done to improve compliance, among other things.
For users of the tools, what the law calls deployers, the requirements are mainly about disclosure. Users are to let people know that an AI system is being used and how, and to give them a chance to opt out. If someone doesn’t like the decision the AI makes – if a job or loan applicant gets turned down, for example – the person must have a chance to appeal. More broadly, users are to post a public statement about their use of the systems and to share what they’re doing to manage discrimination risk, what data goes into the system and other disclosures of that nature.
The law references the AI risk management framework developed by the National Institute of Standards and Technology and says that, if a company adheres to it or to other practices later added to the regulations, it can benefit from a rebuttable presumption that it used reasonable care to protect people. And if the company discovers a tool is making discriminatory decisions despite all of that, it benefits from an affirmative defense against prosecution – there’s no private right of action – as long as it discloses the problem and works to improve its compliance.
“This combination of a rebuttable presumption and an affirmative defense creates strong incentives for employers to prioritize AI risk management and to proactively identify and address potential issues,” Rachel See and Annette Tyman of Seyfarth Shaw say in their analysis of the law.
Tyler Thompson of Greenberg Traurig recommends in his analysis of the law that companies determine whether their risk management program is adequate to comply. “Any company wishing to benefit from the affirmative defense provided by [the law] will need to have a satisfactory compliance program in place,” he says.