Common Sense Media to Collaborate with OpenAI on AI Guidelines and Education

Common Sense Media, a nonprofit that provides tools, reviews, and other resources to help vet technology and media for children, has announced a partnership with OpenAI focused on teenagers' use of artificial intelligence. The goal: "to help realize the full potential of AI for teens and families and minimize the risks," according to a news announcement.

The organizations plan to collaborate on AI guidelines and educational materials for parents, educators, and young people, the announcement said. In addition, they will put together "family-friendly" GPTs in OpenAI's GPT Store based on Common Sense ratings and standards.

Common Sense's AI ratings assess how well a product aligns with eight AI principles:  

  • People first: Respecting human rights, children's rights, identity, integrity, and human dignity, and maintaining human decision-making.
  • Fairness: Inclusion by design, with active evaluation of blind spots, hidden assumptions, and unfair biases in data.
  • Trust: Upholding high standards of scientific excellence and rigor.
  • Kids' safety: Prioritizing the protection of children's safety, health, and well-being.
  • Learning: Providing high-quality content that allows all students to participate fully in learning.
  • Social connection: Supporting meaningful human contact and connection.
  • Privacy: Clear policies and procedures for protecting sensitive data.
  • Transparency and accountability: Providing mechanisms for meaningful human control and human agency in decision-making.

"Together, Common Sense and OpenAI will work to make sure that AI has a positive impact on all teens and families," said James P. Steyer, founder and CEO of Common Sense Media, in a statement. "Our guides and curation will be designed to educate families and educators about safe, responsible use of ChatGPT, so that we can collectively avoid any unintended consequences of this emerging technology."

More information on Common Sense's approach to AI is available at commonsensemedia.org/ai.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
