California AI Regulation Bill Moves to Assembly Vote with Key Amendments

Spearheaded by Senator Scott Wiener (D-San Francisco), California’s "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (Senate Bill 1047) has cleared the Assembly Appropriations Committee with some significant amendments. The bill, aimed at establishing rigorous safety standards for large-scale artificial intelligence (AI) systems, is set for a vote on the Assembly floor on Aug. 20 and must pass by Aug. 31 to move forward.

SB 1047 was crafted to regulate the development of advanced AI models by setting clear, actionable safety requirements and accompanying regulatory measures. It targets AI models that are especially powerful and expensive to develop, with the goal of balancing innovation with public safety.

The bill sets standards for AI models with significant computational power: specifically, models trained using more than 10^26 floating-point operations (FLOPs) at a cost of more than $100 million. These models are referred to as "frontier" AI systems.

Among other provisions, the bill establishes risk assessment, safety, security, and testing requirements that the developer of a covered AI model must fulfill before training the model, using it, or making it available for public or commercial use.

Beginning Jan. 1, 2028, it requires the developer of a covered model to annually retain a third-party auditor to perform an independent audit of compliance with the bill's requirements.

The bill has undergone substantial revisions based on industry feedback, perhaps most notably from Anthropic, a leading AI research organization known for its work in developing advanced AI systems with a focus on safety, alignment, and ethical considerations. The aim of the amendments is to balance innovation and safety, Wiener said in a statement.

"We can advance both innovation and safety; the two are not mutually exclusive," Wiener said. "While the amendments do not reflect 100% of the changes requested by all stakeholders, we've addressed core concerns from industry leaders and made adjustments to accommodate diverse needs, including those of the open source community."

Major Amendments to SB 1047

  • Criminal Penalties for Perjury Removed: The bill now imposes only civil penalties for false statements to authorities, addressing concerns about the potential misuse of criminal penalties.
  • Elimination of the Frontier Model Division (FMD): The proposed new regulatory body has been removed. Enforcement will continue through the Attorney General's office, with some FMD functions transferred to the Government Operations Agency.
  • Adjusted Legal Standards: The standard for developer compliance has shifted from "reasonable assurance" to "reasonable care," a well-established common law standard, including elements like adherence to NIST safety standards.
  • New Threshold for Fine-Tuned Models: Models fine-tuned at a cost of less than $10 million are exempt from the bill's requirements, focusing regulatory burden on larger-scale projects.
  • Narrowed Pre-Harm Enforcement: The Attorney General's authority to seek civil penalties is now restricted to situations where actual harm has occurred or imminent threats to public safety exist.

Support and Criticism

SB 1047 has garnered support from prominent AI researchers, including Geoffrey Hinton and Yoshua Bengio, who emphasize the importance of balancing innovation with safety. Hinton praised the bill for its sensible approach, highlighting the need for legislation that addresses the risks of powerful AI systems.

However, the bill has also faced criticism, particularly from startup founders and industry leaders. Critics argue that the bill's thresholds and liability provisions could stifle innovation and disproportionately burden smaller developers. Anjney Midha, General Partner at Silicon Valley-based VC firm Andreessen Horowitz, criticized the bill's focus on model-level regulations rather than specific misuse or malicious applications. He warned that stringent requirements could drive innovation overseas and hinder the open source community.

"It's hard to [overstate] just how blindsided startups, founders, and the investor community feel about this bill," Midha said during an interview posted on his company's website. "When it comes to policy-making, especially in technology at the frontier, our legislators should be sitting down and soliciting the opinions of their constituents — which in this case, includes startup founders."

"If this passes in California, it will set a precedent for other states and have rippling consequences inside and outside of the USA — essentially a huge butterfly effect in regard to the state of innovation," he added.

In an open letter on their website ("A statement in opposition to California SB 1047"), members of the AI Alliance, which describes itself as "a community of technology creators, developers, and adopters collaborating to advance safe, responsible AI rooted in open innovation," voiced their concerns about SB 1047.

"While SB 1047 is not targeting open source development specifically, it will affect the open-source community dramatically. The bill requires developers of AI models of 10^26 FLOPs or similar performance (as determined by undefined benchmarks) to implement a full shutdown control that would halt operation of the model and all derivative models. Once a model is open sourced and subsequently downloaded by a third party, by design developers no longer have control over a model. Before such a 'shutdown switch' provision is passed, we need to understand how it can be done in the context of open source; the bill does not answer that question. No models at 10^26 FLOPs are openly available today, but technology is rapidly advancing, and the open ecosystem could evolve alongside it. However, this legislation seems intended to freeze open source AI development at the 2024 level."

Legislative Context

The bill's advancements come amid a backdrop of federal inaction on AI regulation. With the US Congress largely stagnant on technology legislation, California's initiative seeks to preemptively address the risks posed by rapidly advancing AI technologies while fostering a supportive environment for innovation.

Governor Gavin Newsom's administration has also been proactive on AI. The Governor issued an Executive Order last September to prepare for AI's impacts, and his office released a report on AI's potential benefits and harms.

SB 1047 represents a significant step in California's regulatory approach to AI, with its outcome poised to influence both national and global AI policy. The Assembly's vote on Aug. 20 will be a critical juncture in shaping the future of AI regulation in the state.
