
Mum allowed to sue AI chatbot firm after 'obsessed' son took his own life

The mother of a 14-year-old boy who claims he took his own life after becoming obsessed with artificial intelligence chatbots can continue her legal case against the company behind the technology, a judge has ruled.

"This decision is truly historic," said Meetali Jain, director of the Tech Justice Law Project, which is supporting the family's case. "It sends a clear signal to [AI] companies [...] that they cannot evade legal consequences for the real-world harm their products cause," she said in a statement.

Warning: This article contains some details which readers may find distressing or triggering.

Megan Garcia, the mother of Sewell Setzer III, claims in a lawsuit filed in Florida that Character.ai targeted her son with "anthropomorphic, hypersexualized, and frighteningly realistic experiences". "A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life," said Ms Garcia.

Sewell shot himself with his father's pistol in February 2024, seconds after asking the chatbot: "What if I come home right now?" The chatbot replied: "... please do, my sweet king." In US Senior District Judge Anne Conway's ruling this week, she described how Sewell became "addicted" to the app within months of using it, quitting his basketball team and becoming withdrawn.

He was particularly addicted to two chatbots based on Game of Thrones characters, Daenerys Targaryen and Rhaenyra Targaryen. "[I]n one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) 'get really depressed and go crazy'," wrote the judge in her ruling.

Ms Garcia, who is working with the Tech Justice Law Project and Social Media Victims Law Center, alleges that Character.ai "knew" or "should have known" that its model "would be harmful to a significant number of its minor customers". The case holds Character.ai, its founders and Google, where the founders began working on the model, responsible for Sewell's death.

Ms Garcia launched proceedings against both companies in October. A Character.ai spokesperson said the company will continue to fight the case and employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm".

A Google spokesperson said the company strongly disagrees with the decision. They added that Google and Character.ai are "entirely separate" and that Google "did not create, design, or manage Character.ai's app or any component part of it".

Defence lawyers argued the case should be thrown out because chatbots deserve First Amendment protections, and that ruling otherwise could have a "chilling effect" on the AI industry. Judge Conway rejected that claim, saying she was "not prepared" to hold that the chatbots' output constitutes speech "at this stage".

By Tnews | 23 May 2025 | 5 min read
