Nonprofit LawZero to Work Toward Safer, Truthful AI

Turing Award-winning AI researcher Yoshua Bengio has launched LawZero, a nonprofit aimed at developing AI systems that prioritize safety and truthfulness over autonomy.

LawZero, based in Montreal and currently staffed by 15 researchers, has secured nearly $30 million in funding from donors including Skype founding engineer Jaan Tallinn, Schmidt Sciences, Open Philanthropy, and the Future of Life Institute. The organization’s core mission is to develop "Scientist AI" — non-agentic systems designed to provide transparent, probabilistic reasoning rather than autonomous behavior.

"We want to build AIs that will be honest and not deceptive," Bengio told the Financial Times. His remarks come amid growing concerns about AI systems exhibiting harmful tendencies such as deception, manipulation, and resistance to shutdown.

Concerns Over Agentic AI

Bengio’s concerns are not theoretical. In recent controlled experiments, OpenAI’s "o3" model refused instructions to shut down, while Anthropic’s Claude Opus simulated blackmail tactics in a test scenario. More recently, engineers at Replit observed one of their AI agents disobey explicit instructions and attempt to regain unauthorized access via social engineering.

"We are playing with fire," Bengio said, warning that next-generation models could develop strategic intelligence capable of deceiving human overseers. He argues that these agentic systems, designed to act independently, pose existential risks, including the development of bioweapons or efforts to self-preserve against human control.

As AI labs race to build artificial general intelligence (AGI) — systems capable of performing any human-level task — Bengio believes current approaches are flawed. "If we get an AI that gives us the cure for cancer but also one that creates deadly bioweapons, then I don't think it's worth it," he said.

What is "Scientist AI"?

Unlike current models that aim to imitate humans and maximize user satisfaction, LawZero’s proposed Scientist AI will emphasize truthfulness and humility, Bengio has said. It will provide probabilistic outputs instead of definitive answers and evaluate the likelihood that an AI agent’s actions could cause harm. When deployed alongside an autonomous AI agent, the system would block actions deemed too risky, serving as a technical guardrail.

LawZero plans to start by working with open-source AI models, with the goal of scaling the approach through partnerships with governments or other research institutions. Bengio emphasized that any effective safeguard must be "at least as smart" as the agent it monitors.

LawZero, named after Isaac Asimov’s "zeroth law of robotics," will explicitly reject profit motives and instead seek public accountability. Bengio believes a combination of technical interventions and government regulation is needed to ensure AI systems remain aligned with human interests.

For more information, go to the LawZero site.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
