Dive Brief:
- California Gov. Gavin Newsom vetoed a landmark bill that would have regulated large-scale artificial intelligence models in the state and likely served as a blueprint for other states considering how to oversee a technology predicted to alter vast swaths of the U.S. economy. The bill would have covered AI models that cost more than $100 million to develop or that require large amounts of computing power to train.
- Newsom wrote Sunday in his veto response that California has a role to play in regulating AI, with 32 of the 50 largest AI companies based there, but he found the measure’s focus on the largest and most well-capitalized AI models to be flawed. “A California-only approach may well be warranted — especially absent federal action by Congress — but it must be based on empirical evidence and science,” Newsom wrote.
- The bill’s primary sponsor, Sen. Scott Wiener, D-San Francisco, said in a social media post Sunday that the veto means companies creating a powerful technology “face no binding restrictions from U.S. policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way.”
Dive Insight:
The bill’s focus on large, costly models “could give the public a false sense of security about controlling this fast-moving technology,” Newsom wrote. “Smaller, specialized models may emerge as equally or even more dangerous” than the ones covered by the legislation.
The governor also criticized the legislation for not considering where an AI system would be deployed or whether it involves critical decision-making or sensitive data.
The European Union has enacted the most stringent AI regulation to date with the EU AI Act, which took effect in July. Most other major economies, including Australia, India and the U.S., have so far issued policy guidelines and voluntary frameworks to shape AI development, stopping short of binding laws.
Colorado has passed a state law, taking effect in February 2026, that requires a developer of a “high-risk” AI system to exercise reasonable care to protect consumers from “any known or reasonably foreseeable risks of algorithmic discrimination.” The state attorney general can seek civil penalties and injunctions.
The California legislation would have imposed new liability for tech companies and mandated new testing, compliance and reporting protocols for AI developers. The state would have been able to impose fines equal to as much as 10% of the cost of the computing power used to train a model covered under the law, and up to 30% for subsequent violations.
The legislation would also have required companies to be able to shut down a covered AI model quickly, if necessary, and imposed an annual third-party audit requirement starting in 2026 to ensure compliance with the state law.
The bill also would have authorized the state attorney general to bring civil actions and would have established whistleblower protections for those who disclose information about compliance with the law or about an AI model that poses “an unreasonable risk of critical harm.”
Such harm would have included the creation or use of a chemical, biological, radiological or nuclear weapon that results in mass casualties, or at least $500 million in damage from cyberattacks on critical infrastructure enabled by an AI model.
The legislation had split some in the technology community, with Elon Musk, whose xAI develops the Grok chatbot, endorsing the bill, while Google, Meta, OpenAI and other tech companies opposed it. Andreessen Horowitz, the prominent Silicon Valley venture capital firm, also battled the legislation.
“For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public,” Musk wrote in August on his X social media platform.
Arun Subramaniyan, a former Intel executive who is the founder and chief executive of generative AI startup Articul8, said that AI companies did not want “to take responsibility for end-use of their models where they don't have control of the end user's application.” That could, in turn, hurt open-source AI development and lead to more closed systems, he wrote Monday in an email to Legal Dive.
“I think there was broad support for regulation in the industry, just not one sided or without [a] balanced approach that safeguards the consumers’ interests without stifling innovation,” Subramaniyan wrote.
The California effort highlights the ongoing debate around AI responsibility, “whether AI laws should regulate the models and the developers or the applications and uses of the AI technology,” law firm Pillsbury wrote in a Sept. 26 analysis of the bill.
“Whether you are an ‘AI doomer,’ believing that AI presents an existential threat to humanity, or you are an ‘effective accelerationist’ or ‘e/accs’ and believe that AI can bring a utopian future, it is clear AI regulations are here and now, perhaps trying to address not only the demonstrable risks but also to anticipate the hypothetical risks,” five firm lawyers and advisors wrote.
Newsom has signed several other AI bills into law, including one to criminalize blackmail using AI-created nude images, one to require contracts with actors before a “digital replica” of their voice or image is created and one to require disclosure of AI-generated political advertisements.
While the EU and Colorado measures focus on the “mechanics” of AI platform development, “California has been focused on the consumer-protection aspect” of AI, said Karen Shin, a lawyer in the privacy, security and data protection group at Blank Rome LLP.
“That trend will continue in terms of dividing and distinguishing responsibility” between the technology’s creators and its users, she said. “Just because you are using an AI system I don’t think that will mean you are not liable.”