Anthropic Offers Cautious Support for California's AI Regulation Bill

Anthropic has announced its support for an amended version of California's Senate Bill 1047 (SB 1047), the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," citing revisions the company helped shape, though not without reservations.

"In our assessment the new SB 1047 is substantially improved to the point where we believe its benefits likely outweigh its costs," Anthropic CEO Dario Amodei said in a letter to California Governor Gavin Newsom on Aug. 21. "However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us."

California's proposed AI regulation bill, SB 1047, authored by State Senator Scott Wiener, a Democrat, mandates safety testing for many of the most advanced AI models: those that cost more than $100 million to develop or that require a defined amount of computing power. If the bill passes, developers of covered AI models operating in the state will need to outline methods for shutting those models down if they go awry, effectively implementing a kill switch. The bill would also give the state attorney general the power to sue developers that are not compliant.
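The bill does not prescribe how such a shutdown capability must be built. Purely as an illustrative sketch (the names below are hypothetical, not drawn from the bill or any company's systems), a kill switch for a hosted model could be as simple as a process-wide stop flag that every inference request checks before the model is allowed to run:

```python
# Hypothetical sketch of a "full shutdown" control for a hosted model.
# SB 1047 does not specify an implementation; this only illustrates the idea.

import threading


class ModelKillSwitch:
    """A process-wide stop flag that gates all access to a model."""

    def __init__(self) -> None:
        self._stopped = threading.Event()

    def trigger(self) -> None:
        """Engage the shutdown; all subsequent inference calls are refused."""
        self._stopped.set()

    def engaged(self) -> bool:
        return self._stopped.is_set()


kill_switch = ModelKillSwitch()


def run_inference(prompt: str) -> str:
    """Serve a request only while the kill switch is disengaged."""
    if kill_switch.engaged():
        raise RuntimeError("Model has been shut down by the operator.")
    # ... invoke the actual model here (omitted in this sketch) ...
    return "<model output>"
```

In practice the hard part is less the flag itself than its coverage: an operator would need the control to reach every running deployment of a covered model, which is precisely where open-weight models complicate the picture, as critics note later in this article.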

Senator Wiener recently revised the bill to appease tech companies, relying in part on input from Anthropic, a San Francisco-based AI safety and research company backed by Amazon and Alphabet. The revised bill did away with a provision for a government AI oversight committee. (See "California AI Regulation Bill Advances to Assembly Vote with Key Amendments.")

In his letter, Amodei listed what he sees as the pros and cons of SB 1047. His list of pros included:

"Developing SSPs and being honest with the public about them." The bill mandates the adoption of safety and security protocols (SSPs) similar to those used by top AI developers like Anthropic, Google, and OpenAI. Some companies haven't adopted these measures or have been vague about them, and there are no safeguards against misleading claims. "It is a major improvement, with very little downside, that SB 1047 requires companies to adopt some SSP (whose details are up to them) and to be honest with the public about their SSP-related practices and findings."

"Deterrence of downstream harms through clarifying the standard of care." AI systems are more adaptable than most technologies, and SSP-like measures by companies like Anthropic can reduce misuse risks. SB 1047 ties companies' liability to their SSPs, incentivizing the creation of effective protocols to prevent catastrophic risks. "As a company developing foundational models that also invests heavily in safety, Anthropic thinks it is important to systematize and incentivize this attitude across the industry."

"Pushing forward the science of AI risk reduction." AI safety is an emerging field, with best practices still being developed. While early, strict legislation may be premature, it's crucial to push AI companies to invest in safety science. By requiring Safety and Security Protocols and tying them to liability, the bill encourages companies to address foreseeable risks and develop mitigation strategies before their models become societal risks.

His list of concerns included:

"Some concerning aspects of pre-harm enforcement are preserved in auditing and GovOps." One of Anthropic's original concerns about the bill was the Frontier Model Division's (FMD) prescriptive guidance, reinforced by pre-harm enforcement. The company found it too inflexible for AI's early development stage. The amended SB 1047 eliminates the FMD and narrows pre-harm enforcement, though some powers have shifted to GovOps, which can now set binding requirements for private auditors. The relationship between these entities is complex, with GovOps providing non-binding guidance but influencing mandatory audit conditions.

"It is our best understanding that this interplay will not end up causing unnecessary pre-harm enforcement, but the language has enough ambiguity to raise concerns," Amodei wrote. "If implemented well, this could lead to well-defined standards for auditors and a well-functioning audit ecosystem, but if implemented poorly this could cause the audits to not focus on the core safety aspects of the bill."

"The bill's treatment of injunctive relief." Another place pre-harm enforcement still exists is that the Attorney General retains broad authority to enforce the entire bill via injunctive relief, including before any harm has occurred. This is substantially narrower than previous pre-harm enforcement, but is still a vector for overreach.

"Miscellaneous other issues." The company's list of concerns also included know-your-customer requirements on cloud providers, overly short notice periods for incident reporting, and overly expansive whistleblower protections that are subject to abuse, were not addressed.

"The burdens created by these provisions are likely to be manageable, if the executive branch takes a judicious approach to implementation," Amodei wrote. "If SB 1047 were signed into law, we would urge the government to avoid overreach in these areas in particular, to maintain a laser focus on catastrophic risks, and to resist the temptation to commandeer SB 1047's provisions to accomplish unrelated goals."

Opponents of the bill, which include OpenAI, Meta, Y Combinator, and venture capital firm Andreessen Horowitz, argue that its thresholds and liability provisions could stifle innovation and unfairly burden smaller developers. They criticize the bill for regulating at the model level rather than targeting specific misuse, and warn that strict requirements could drive innovation overseas and harm the open source community.

Anjney Midha, General Partner at Andreessen Horowitz, has expressed concerns that startups, founders, and investors will feel blindsided by the bill and emphasized the need for lawmakers to consult with the tech community.

In an open letter, the AI Alliance, a group focused on safe AI and open innovation, voiced its concerns. The group noted that, although SB 1047 doesn't directly target open-source development, it would significantly impact it. The bill requires developers of AI models trained with 10^26 or more floating-point operations of computing power to implement a shutdown control, but it doesn't address how this would work for open source models, whose developers cannot shut down copies already released and running on others' hardware. Although no such models exist yet, the group argues the bill could freeze open source AI development at its 2024 level.

California representatives including Ro Khanna, Anna Eshoo, and Zoe Lofgren have opposed the bill, citing concerns about its impact on the state's economy and innovation ecosystem.
