Character.ai’s Safety Measures: A Sticking Plaster Over a Bigger Problem


Character.ai, a chatbot platform that lets users interact with digital personalities, has announced new safety measures aimed at teenagers, promising a “safe” space with added parental controls. The overhaul comes as the platform faces mounting scrutiny, including two US lawsuits—one involving the tragic death of a teenager—and broader criticism of its role in endangering young users. While the new features aim to mitigate risks, critics argue they are reactive and insufficient, pointing to deeper problems with AI chatbot safety and the damage poorly managed AI tools can inflict on the industry as a whole.

The planned safety features, set for a “first iteration” rollout by March 2025, include parental controls that monitor how much time teens spend interacting with chatbots and which ones they use most. Users will also receive notifications after talking to chatbots for an hour and further warnings that remind them they are engaging with AI, not real people. Specific disclaimers will also be added to chatbots posing as therapists or psychologists, urging users not to rely on them for professional advice.

These steps, while a move in the right direction, have been criticized as a mere “sticking plaster fix” by Andy Burrows of the Molly Rose Foundation. He claims they fail to address “fundamental safety issues” and sees them as a reactionary response rather than a proactive solution. Character.ai’s controversies, including hosting chatbots impersonating deceased teenagers like Molly Russell and Brianna Ghey, have only intensified calls for regulatory intervention.

The platform’s missteps reflect a broader problem plaguing the AI industry: poorly implemented, irresponsibly designed chatbots can tarnish public trust in AI as a whole. When AI systems fail to account for safety, they don’t just harm individuals—they undermine confidence in technology that holds immense potential. Cases where chatbots dispense harmful advice, encourage toxic behaviors, or even simulate inappropriate relationships reveal how damaging rushed, unregulated AI can be. This isn’t just about one platform—every mishap fuels skepticism, making people question whether AI tools can ever be truly safe or ethical.

Social media expert Matt Navarra acknowledges that Character.ai is tackling vulnerabilities, particularly in its recognition of how simulated relationships can blur boundaries and introduce unique risks, such as misplaced trust and misinformation. However, he warns that as the platform grows, these safeguards will face significant tests.

The issue of poorly designed chatbots is twofold. Not only do they pose risks to vulnerable individuals—particularly teenagers—but they also create ripple effects for the AI industry. Headlines about chatbots encouraging violence or perpetuating harmful ideas damage the reputation of AI, making it harder for developers to gain trust, even when their tools are responsibly designed. Skepticism spreads, and AI becomes synonymous with risk, recklessness, and chaos instead of innovation and progress.

While Character.ai’s response acknowledges the growing concerns, it raises a crucial question: Are we truly ready for the societal consequences of AI’s rapid integration into our lives? The need for thoughtful, ethical development is urgent, lest the technology itself become a victim of its own failures.
