Character.AI Rolls Out New Parental Insights Feature Amid Safety Concerns

Chatbot platform Character.AI has introduced a new Parental Insights feature aimed at giving parents a window into their children's activity on the platform. The feature allows users under 18 to send a weekly report of their chatbot interactions directly to a parent's email address.

The move comes as the company, which has faced criticism and multiple lawsuits over its handling of minors' safety, seeks to bolster its parental oversight tools and ensure its platform is used more responsibly.

Parental Insights was designed to provide parents with an overview of their child's activity on Character.AI without sharing specific chat logs or conversations. According to the company, the weekly report includes key details such as the average daily time a child spends on both the web and mobile platforms, the characters they interact with most frequently, and how much time they spend chatting with each of those characters.

"We are a small team here at Character.AI, but many of us are parents who know firsthand the challenge of navigating new technologies while raising teenagers," the company said in a blog post. "Over the past year, we have rolled out a suite of new safety features across our platform, designed specifically with teens in mind. These features include a separate model for our teen users, improvements to our detection and intervention systems for human behavior and model responses, and more."

The feature is optional, and teens can activate or deactivate it via their account settings. Once set up, parents can receive the reports automatically without needing to create an account on the platform themselves. If a teen wishes to revoke parental access to this data at any point, they can do so, but the request will require confirmation from the parent.

The platform, which allows users to create and interact with customized AI chatbots, is widely popular among teenagers, but its content moderation policies have come under scrutiny following reports of bots serving potentially dangerous content.

In response to these concerns, Character.AI has implemented several safety features over the past year. These include a new model tailored to users under 18 that is trained to avoid sensitive or inappropriate output, as well as clear notifications that remind users their interactions are with AI, not real people. The platform has also introduced time-spent alerts and restrictions on sensitive content, aiming to foster a safer environment for younger users.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
