California AI Watermarking Bill Supported by OpenAI

OpenAI, creator of ChatGPT, is backing a California bill that would require tech companies to label AI-generated content in the form of a digital "watermark." Microsoft, Adobe, and other tech companies have also expressed their support.

The proposed legislation, known as the "California Digital Content Provenance Standards" (AB 3211), aims to ensure transparency in digital media by identifying content created through artificial intelligence. This requirement would apply to a broad range of AI-generated material, from harmless memes to deepfakes that could be used to spread misinformation about political candidates.

Watermarking is a technique for embedding additional information into images, audio, video, and documents, often invisibly, to establish their provenance and authenticity.
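To illustrate the general idea (this is a minimal sketch, not the scheme AB 3211 or any vendor would actually mandate), a simple least-significant-bit approach hides a provenance tag in the low-order bits of raw pixel bytes, where the change is imperceptible to viewers:

```python
def embed_watermark(pixels: bytes, message: bytes) -> bytes:
    """Hide message bits in the least significant bit of each carrier byte."""
    # Unpack the message into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        # Clear the lowest bit, then set it to the message bit.
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)


def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read back `length` bytes of hidden data from the carrier's low bits."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, length * 8, 8)
    )
```

Each carrier byte changes by at most one unit, which is why the mark is invisible; real provenance systems use far more robust schemes that survive compression and cropping, which this toy example does not.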

In a letter sent to California State Assembly member Buffy Wicks, who authored the bill, OpenAI Chief Strategy Officer Jason Kwon emphasized the importance of transparency in AI content, especially during election years. "New technology and standards can help people understand the origin of content they find online, and avoid confusion between human-generated and photorealistic AI-generated content," Kwon wrote. (The letter was reviewed by Reuters.)

This bill has been overshadowed by another California state bill, SB 1047, which would require AI developers to conduct safety testing on some of their own models. That bill has drawn backlash from the tech industry, including Microsoft-backed OpenAI.

California state lawmakers introduced 65 bills addressing artificial intelligence during this legislative session, according to the state's legislative database. These proposed measures include ensuring algorithmic decisions are unbiased and protecting the intellectual property of deceased individuals from AI exploitation. However, many of these bills have already stalled.

For San Francisco-based OpenAI, provenance requirements such as watermarking have become a policy priority heading into the election.

With elections taking place in countries representing a third of the world's population this year, experts are increasingly concerned about the impact of AI-generated content, which has already played a significant role in some elections, including in Indonesia.
AB 3211 passed the state Assembly with a unanimous 62-0 vote and recently cleared the Senate Appropriations Committee, setting it up for a full Senate vote. If approved by Aug. 31, the bill will go to Governor Gavin Newsom, who must sign or veto it by Sept. 30.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].