Character.AI Rolls Out New Parental Insights Feature Amid Safety Concerns

Chatbot platform Character.AI has introduced a new Parental Insights feature aimed at giving parents a window into their children's activity on the platform. The feature allows users under 18 to send a weekly report of their chatbot interactions to a parent's email address.

The move comes as the company, which has faced criticism and multiple lawsuits over its handling of minors' safety, seeks to bolster its parental oversight tools and ensure its platform is used more responsibly.

Parental Insights was designed to provide parents with an overview of their child's activity on Character.AI without sharing specific chat logs or conversations. According to the company, the weekly report includes key details such as the average daily time a child spends on both the web and mobile platforms, the characters they interact with most frequently, and how much time they spend chatting with each of those characters.

"We are a small team here at Character.AI, but many of us are parents who know firsthand the challenge of navigating new technologies while raising teenagers," the company said in a blog post. "Over the past year, we have rolled out a suite of new safety features across our platform, designed specifically with teens in mind. These features include a separate model for our teen users, improvements to our detection and intervention systems for human behavior and model responses, and more."

The feature is optional, and teens can activate or deactivate it via their account settings. Once set up, parents can receive the reports automatically without needing to create an account on the platform themselves. If a teen wishes to revoke parental access to this data at any point, they can do so, but the request will require confirmation from the parent.

The platform, which allows users to create and interact with customized AI chatbots, is widely popular among teenagers, but its content moderation policies have been called into question following reports of bots serving potentially dangerous content.

In response to these concerns, Character.AI has implemented several safety features over the past year. These include a new model tailored to users under 18 that is trained to avoid sensitive or inappropriate output, as well as clear notifications that remind users their interactions are with AI, not real people. The platform has also introduced time-spent alerts and restrictions on sensitive content, aiming to foster a safer environment for younger users.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He has been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he has written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
