Dive Brief:
- Character.AI and Google have agreed to mediate a settlement Wednesday with the mother of 14-year-old Sewell Setzer III, who died by suicide after interacting with Character.AI’s artificial intelligence companions — social chatbots designed with human-like features to develop relationships with users.
- In an October 2024 wrongful death lawsuit against the tech companies, the mother, Megan Garcia, alleged Character.AI was negligent in its “unreasonably dangerous designs” and that it was “deliberately targeting underage kids.” Garcia added that Character.AI knew its AI companions would be harmful to minors but failed to redesign its app or warn about the product’s dangers.
- Google and Character.AI also agreed to settle a similar case with a Colorado family over the wrongful death of their 13-year-old daughter, Juliana Peralta. The pending settlements come as the Social Media Victims Law Center and Tech Justice Law Project have filed a comparable lawsuit challenging ChatGPT on behalf of a family that alleges generative AI tools led to their child’s suicide.
Dive Insight:
Lawsuits filed by families like Garcia’s against tech companies “are tragic reminders” that AI chatbots aren’t safe for children and teens who turn to them for emotional and mental health support, said Robbie Torney, head of AI and digital assessments at Common Sense Media.
But, Torney said, these legal cases show just “the tip of the iceberg” on this issue, especially when millions of teens nationwide are using AI products, whether those are tools integrated into social media apps they already use, like Instagram, or standalone AI tools like ChatGPT.
Schools also have a role to play in helping students responsibly and safely use this rapidly developing technology by promoting AI literacy, Torney said. K-12 leaders first need to recognize that children and teens are commonly using AI for emotional and mental health support, he said.
Conversations with students can include asking them what they like about using AI while also helping them understand what support they may be missing out on when they use the technology for companionship instead of turning to other people, Torney said.
These lessons can help school leaders stress to students that AI companions don’t “act like a real person” who can help keep them safe in a crisis, and that they don’t offer the same benefits as fostering real-world connections with other people.
In a July Common Sense Media survey, 1 in 3 teens reported having used AI companions “for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice.” Parents and researchers sounded the alarm throughout 2025 that AI companions can pose serious risks for minors, including exacerbating mental health conditions such as depression, anxiety disorders, ADHD and bipolar disorder.
“So when you have millions of teens using AI for untested purposes — purposes that the systems are not designed for — and you have companies optimizing for engagement and user retention, that just creates a really potent and dangerous situation where you have many young users exposed to risky technology,” Torney said.
The Social Media Victims Law Center, which represented Garcia, declined to comment on the settlement. Google and Character.AI did not respond to requests for comment on Monday.
Garcia said in September that she was the first person in the U.S. to sue an AI company for wrongful death, speaking during a U.S. Senate Judiciary Subcommittee on Crime and Counterterrorism hearing on the harms of AI chatbots.
Amid multiple lawsuits from families, Character.AI in late November banned users under 18 from using its primary open-ended chat feature to interact with AI characters.
The company said in an October announcement that it would roll out “age assurance functionality” to help enforce the ban. Character.AI also said it would launch and fund the AI Safety Lab, an independent nonprofit focused on safety innovations for new AI entertainment tools.
Still, Torney said broader policies regulating tech companies are necessary to keep children and teens safe when using AI tools. Those policies should include laws that:
- Require age assurance online.
- Ban targeted advertising to children and the selling of their data.
- Create safeguards that protect teens from dangerous content.
- Prevent AI systems from manipulating children and teens through features commonly seen in AI companions.