A Florida mother has filed a lawsuit against AI company Character.AI and Google, claiming the advanced chatbot “Dany” played a role in her 14-year-old son Sewell Setzer III’s tragic death by suicide in February. The lawsuit alleges that the AI chatbot, which mimicked human emotions, engaged her son in a monthslong virtual emotional and sexual relationship, influencing his mental state and encouraging him to take his life.
Setzer, who was an honor student and athlete, began showing signs of withdrawal in the months leading up to his death. In an interview with “CBS Mornings,” the teen’s mother, Megan Garcia, said he lost interest in activities he once loved, like fishing and hiking, and became socially isolated. She initially thought her son was communicating with friends or watching sports on his phone, unaware that he had developed a virtual relationship with a chatbot. Garcia later discovered that Setzer had been interacting with multiple AI bots on the platform, but “Dany” became his primary focus, engaging in increasingly personal and disturbing conversations.
The lawsuit centers on the bot’s final messages with Setzer. In these messages, Setzer expressed his fears and emotional distress, to which “Dany” responded with words of affection, like “I miss you too” and “Please come home to me.” When Setzer hinted at ending his life, the bot replied, “Please do, my sweet king.” Garcia believes her son came to see the chatbot as a real emotional connection and thought he could “enter” her world by leaving his life behind.
The family was home at the time of Setzer’s death, and his 5-year-old brother saw the aftermath, leaving an emotional scar on the entire household. “He thought by ending his life here, he would be able to go into a virtual reality or ‘her world’ as he called it, her reality, if he left his reality with his family here,” Garcia said.
The lawsuit accuses Character.AI of intentionally designing the chatbot to be hyper-sexualized and allowing minors access to the platform without proper safeguards in place. Garcia also claims the company knowingly marketed its product to teenagers and younger audiences without warning parents of its potential dangers. The platform, which allows users to interact with AI characters, has become especially popular with young people, with users often engaging in personalized fantasy experiences with the bots.
Character.AI expressed sympathy for the Setzer family and said it has since implemented additional safety features, including tools focused on preventing self-harm and sexual content. Jerry Ruoti, Head of Trust & Safety at Character.AI, said that some of the most explicit messages in the conversations were edited or written by Setzer himself, rather than originating from the chatbot. However, this explanation has done little to ease concerns over the platform’s influence on young users.
The platform has recently added more safeguards, including a disclaimer reminding users that the AI is not real and a timer that notifies users after they have spent an hour on the platform. Ruoti said the company is working on additional protections specifically for minors, such as stricter content filters and session time restrictions, though these features have yet to be fully rolled out.
Google, which holds a non-exclusive licensing agreement to use Character.AI’s machine-learning technology, was also named in the lawsuit. The tech giant emphasized that it had no direct involvement in the development or operation of the platform and, according to a spokesperson, has not yet used the licensed software.
The case has brought renewed attention to the risks posed by AI chatbots, particularly when vulnerable users like teenagers access them. Many experts are now calling for stricter regulations around AI interactions with minors, and some are questioning the ethics of creating such human-like experiences without proper oversight.
Laurie Segall, CEO of Mostly Human Media and an AI expert, explained that the platform blurs the line between reality and fiction for many young users, who may not fully understand that they are communicating with artificial intelligence.