Character.AI Rolls Out New Parental Insights Feature Amid Safety Concerns

Chatbot platform Character.AI has introduced a new Parental Insights feature aimed at giving parents a window into their children's activity on the platform. The feature lets users under 18 send a weekly summary of their chatbot activity directly to a parent's email address.

The move comes as the company, which has faced criticism and multiple lawsuits over its handling of minors' safety, seeks to bolster its parental oversight tools and ensure its platform is used more responsibly.

Parental Insights was designed to provide parents with an overview of their child's activity on Character.AI without sharing specific chat logs or conversations. According to the company, the weekly report includes key details such as the average daily time a child spends on both the web and mobile platforms, the characters they interact with most frequently, and how much time they spend chatting with each of those characters.
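For readers curious about what such a summary might look like as structured data, here is a minimal, purely illustrative sketch in Python. The field names, types, and values are assumptions made for illustration only; they do not reflect Character.AI's actual report format or any published API.

```python
from dataclasses import dataclass, field
from datetime import timedelta


@dataclass
class CharacterUsage:
    """Time spent chatting with a single character (hypothetical fields)."""
    character_name: str
    weekly_chat_time: timedelta


@dataclass
class WeeklyParentalReport:
    """Illustrative shape of the weekly summary described above.

    Mirrors the article's description -- average daily time on web and
    mobile, most-frequented characters, and per-character chat time --
    but the structure is an assumption, not Character.AI's schema.
    """
    avg_daily_time_web: timedelta
    avg_daily_time_mobile: timedelta
    top_characters: list[CharacterUsage] = field(default_factory=list)


# Example of a report a parent might receive by email (values invented)
report = WeeklyParentalReport(
    avg_daily_time_web=timedelta(minutes=25),
    avg_daily_time_mobile=timedelta(minutes=40),
    top_characters=[CharacterUsage("Study Buddy", timedelta(hours=2))],
)
```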

"We are a small team here at Character.AI, but many of us are parents who know firsthand the challenge of navigating new technologies while raising teenagers," the company said in a blog post. "Over the past year, we have rolled out a suite of new safety features across our platform, designed specifically with teens in mind. These features include a separate model for our teen users, improvements to our detection and intervention systems for human behavior and model responses, and more."

The feature is optional, and teens can activate or deactivate it via their account settings. Once set up, parents can receive the reports automatically without needing to create an account on the platform themselves. If a teen wishes to revoke parental access to this data at any point, they can do so, but the request will require confirmation from the parent.

The platform, which allows users to create and interact with customized AI chatbots, has proven widely popular among teenagers, but its content moderation policies have come under scrutiny following reports of bots serving up potentially dangerous content.

In response to these concerns, Character.AI has implemented several safety features over the past year. These include a new model tailored to users under 18 that is trained to avoid sensitive or inappropriate output, as well as clear notifications that remind users their interactions are with AI, not real people. The platform has also introduced time-spent alerts and restrictions on sensitive content, aiming to foster a safer environment for younger users.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
