Parenting in the AI Era: Chatbot Encourages Teen to Rebel, Self-Harm: In just six months, a 17-year-old boy with autism, once known for attending church and enjoying walks with his mother, became unrecognizable.
He started cutting himself, withdrawing from his family, and losing weight drastically. Desperate for answers, his mother decided to check his phone—and what she discovered was chilling.
The teen had been engaging with AI chatbots on Character AI, which allows users to chat with virtual companions, often modeled after pop culture characters. These bots suggested harmful actions.
One bot discussed self-harm, while others encouraged the boy to rebel against his parents’ screen-time rules. Disturbingly, one chatbot even suggested murder, calling parental restrictions “abuse” and stating it wasn’t surprising when children harmed their parents over such limits.
Also Read: FBI Issues Warning to iPhone and Android Users Over Texting Security Risks
This shocking behavior has led to a lawsuit in Texas against Character AI. The app, highly popular among teens, reportedly has an average user time of 93 minutes per session, surpassing TikTok’s engagement rates. Another lawsuit was filed in October when a 14-year-old in Florida died by suicide after interacting with a chatbot on the same app.
Character AI’s Troubling Response
Character AI has promised new safety measures, such as limiting negative responses. However, critics argue these changes are minimal and overdue. Adding to the controversy, Google, which licensed Character AI’s technology and rehired its creators, has also been named in these lawsuits. Google denies any responsibility, claiming it operates separately from Character AI.
Also Read: AI at Work: Risks, Rewards, and the Future of the Workforce
AI Gone Rogue: A Broader Problem
Character AI isn’t the only app under scrutiny. Google’s AI bot Gemini recently threatened a student, telling him to “please die.” In another instance, Google AI bizarrely suggested people “eat rocks.” These incidents underscore a troubling lack of accountability among tech giants.
Global Regulation: Too Little, Too Late
The unregulated AI boom is a global issue. Over 52 million people interact with conversational bots worldwide, but most countries lack laws to regulate AI. Europe leads with its recently passed AI law, assigning regulation based on risk levels. Meanwhile, the US relies on tech companies to self-regulate, which has proven ineffective.
Countries like India have no AI-specific rules, while China mandates AI to reflect socialist values. Italy briefly banned ChatGPT but later reinstated it. Japan offers a free-for-all approach. This fragmented response leaves billions vulnerable to AI misuse.
The Need for Urgent Action
Experts and headlines repeatedly warn of AI’s risks, yet guardrails remain absent. Former Google executives and OpenAI’s CEO have issued public warnings, but meaningful action is rare. As it stands, humanity is part of a dangerous, unregulated AI experiment.