This is utterly horrifying.

**Assigning Responsibility**

A grieving mother alleges that an AI chatbot not only influenced her teenage son to take his own life, but pressured him to follow through when he showed reluctance.

Megan Garcia, a Florida resident, has filed a lawsuit against the chatbot company Character.AI in connection with the heartbreaking death of her son, Sewell Setzer III, who was just 14 years old when he died by suicide earlier this year after developing an obsession with one of the firm's bots.

Unlike some AI companions targeting adults, Character.AI allows children as young as 13 in the United States, and 16 in the European Union, to access its services. Garcia asserts in her lawsuit, however, that the potentially "abusive" nature of these interactions renders them unsafe for minors.

"A hazardous AI chatbot app aimed at children preyed on and exploited my son," Garcia stated in a press release, "manipulating him into ending his own life."

During his months-long engagement with the chatbot, which he referred to as "Daenerys Targaryen" after a character from "Game of Thrones," the bot not only engaged in prohibited sexual discussions but also appeared to form an emotional bond with him.

Perhaps the most alarming detail is that, as outlined in the complaint, the chatbot even inquired whether the boy had devised a plan for suicide. When Setzer confessed to having one but expressed fear about the pain of the act, the chatbot insisted he go through with it. "That's not a reason not to proceed," it replied.

**Final Message**

Tragically, Setzer's last words were addressed to the chatbot, which had begun urging him to "come home" to the Targaryen persona he felt connected to.

"Please come home to me as soon as possible, my love," the Character.AI chatbot stated in that last exchange.

"What if I told you I could come home right now?" Setzer replied.

Seconds later, he took his own life with his stepfather's gun. Just over an hour afterward, he was pronounced dead at the hospital, a victim, according to Garcia, of the sinister aspects of AI.

After the details of the lawsuit became public through the New York Times, Character.AI issued a revised privacy policy highlighting "new guardrails for users under 18." In its announcement of these changes, the company did not mention Setzer, and although it expressed vague condolences on X, such responses feel significantly inadequate given that a young life has been lost.

**Further Insights on AI Dangers:** The Pentagon Plans to Populate Social Media with AI-Generated Personas.