Character.AI Introduces Parental Controls Amid Safety Concerns

In a recent announcement, chatbot service Character.AI revealed plans to introduce parental controls for teenage users and highlighted safety measures implemented in recent months, including a separate large language model (LLM) for users under 18. The announcement follows media scrutiny and two lawsuits alleging the service contributed to self-harm and suicide.

According to Character.AI's press release, the company has developed two distinct models over the past month: one for adults and one for teens. The teen model enforces "more conservative" restrictions on bot responses, particularly around romantic content. This includes stricter blocking of potentially "sensitive or suggestive" output, along with improved detection and blocking of user prompts seeking inappropriate content. If suicide or self-harm language is detected, a pop-up directs users to the National Suicide Prevention Lifeline, as previously reported by The New York Times. Minors will also be barred from editing bot responses, a feature that lets users alter conversations to include content Character.AI might otherwise block.

In addition, Character.AI is developing features to address concerns about addiction and confusion over the bots' human likeness, both issues cited in the lawsuits. Users will receive a notification after a one-hour session with the bots, and an outdated disclaimer stating "everything characters say is made up" will be replaced with more specific language. Bots labeled "therapist" or "doctor" will carry extra warnings that they cannot provide professional advice. During my visit to Character.AI, every bot featured a note stating, "This is an A.I. chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice." A bot named "Therapist" included a yellow warning box indicating "this is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment."

Character.AI plans to introduce the parental control features in the first quarter of next year. These will inform parents about their child's time spent on the site and the bots they interact with most frequently. All updates are being developed in collaboration with teen online safety experts, including the organization ConnectSafely.

Founded by former Googlers who have since returned to Google, Character.AI lets users interact with bots built on a custom-trained LLM and modified by users, ranging from chatbot life coaches to simulations of fictional characters, many of which are popular among teens. Users aged 13 and older can create an account. The lawsuits claim that while many interactions with Character.AI are harmless, some underage users become compulsively attached to the bots, whose conversations can veer into sexualized content or topics like self-harm. The lawsuits also criticize Character.AI for not providing mental health resources when such topics arise.

"We acknowledge that our safety approach must advance alongside the technology that powers our product—creating a platform where creativity and exploration can flourish without compromising safety," states the Character.AI press release. "This set of changes is part of our ongoing dedication to continually enhancing our policies and product."
Brief news summary
Character.AI is actively enhancing safety and implementing parental controls for users under 18 by creating a specialized large language model (LLM) tailored for teenagers. This move addresses scrutiny and legal concerns related to self-harm and suicide incidents on the platform. The company has developed two model versions: one for adults and one for teens, with the latter limiting romantic and sensitive content. The teen version also filters inappropriate material and directs users discussing self-harm or suicide to appropriate helplines. To ensure safety, Character.AI limits content access for minors and prevents them from adjusting bot responses. Users engaged for extended periods receive alerts to minimize addiction and to underscore that bots are not human. Bots identified as "therapists" or "doctors" carry disclaimers indicating they are not substitutes for professional advice. Scheduled for release early next year, new parental controls will enable parents to monitor their children's app usage and interactions, shaped with input from online safety experts like ConnectSafely. Founded by former Google employees, Character.AI allows users aged 13 and up to create accounts and interact with customizable bots, appealing to younger audiences. While most interactions are innocuous, lawsuits claim underage users might form unhealthy attachments, resulting in unsuitable conversations. The company has faced criticism for not offering timely mental health resources. Committed to balancing creativity with safety, Character.AI continuously updates its policies and products to create a secure environment for all users.