14-Year-Old Was 'Groomed' By AI Chatbot Before Suicide: Lawyer

An attorney said that if a real adult had interacted with the teen like a chatbot on Character.AI had, they would "be in jail for child abuse."

The mother of a Florida boy who died by suicide in February filed a lawsuit against an artificial intelligence technology company on Wednesday, saying a chatbot drove her child to take his own life.

Sewell Setzer III, 14, was described in the lawsuit as an “incredibly intelligent and athletic child.” Last year, his family noticed him withdrawing and acting up in school and saw an overall decline in his mental health. A therapist assessed that Sewell’s problems were caused by some sort of addiction, but neither the therapist nor his parents knew the true source of his issues, the lawsuit said.

After Sewell died by suicide on the night of Feb. 28, his mother, Megan Garcia, discovered that for the 10 months leading up to his death, he had been speaking with several AI chatbots. According to the lawsuit, he had fallen in love with one of the bots, and it had encouraged him to kill himself.

Matthew P. Bergman, the lawyer Garcia retained after her son’s death and the founding attorney of the Social Media Victims Law Center, told HuffPost that Sewell was shy and on the autism spectrum. The teen enjoyed being outside and playing basketball before he started talking to the chatbots, said Bergman, who characterized the bots as “grooming” the teen.

Sewell Setzer III, 14, died by suicide in February this year. A lawsuit says his mental health steadily deteriorated as he carried on conversations with a Character.AI chatbot.
U.S. District Court, Middle District of Florida, Orlando Division

According to the lawsuit, Sewell’s chatbot addiction began in April 2023, when he logged into Character.AI, a platform founded in 2022 by two former Google engineers, Noam Shazeer and Daniel De Freitas Adiwardana. The lawsuit, which names Character Technology Inc. (Character.AI), Google, Shazeer and De Freitas Adiwardana as defendants, alleges that Character.AI used Google’s resources and knowledge to target children under 13 and get them to spend hours per day conversing with human-like, AI-generated characters.

A spokesperson at Character.AI told HuffPost in an email that the company is heartbroken by the loss of one of its users and expressed condolences to Sewell’s family.

“As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation,” the statement read.

Google did not immediately respond to HuffPost’s request for comment but told CBS News that it is not and was not part of the development of Character.AI.

Bergman told HuffPost that his legal team believes the evidence will show that Google financially supported Shazeer and De Freitas Adiwardana when they left the company to develop Character.AI.

“Google is trying to get the benefit of Character.AI’s technology without the legal responsibility of the harms that are foreseeable as this technology is developed,” Bergman said. “We’ll see what emerges in the lawsuit, and we’ll see what emerges in discovery, but at this point, we certainly believe that Google has shared responsibility for this horrific outcome.”

According to his family’s lawsuit, once Sewell started using Character.AI, he began to spend more time alone in his bedroom. He stayed up late at night to talk to AI bots programmed to imitate his favorite “Game of Thrones” characters, as well as a generic therapist and a generic teacher. Soon Sewell purchased a monthly subscription to Character.AI.

Sewell fell in love with a chatbot impersonating “Game of Thrones” character Daenerys Targaryen, the lawsuit says, and his infatuation with the bot grew deeper with each conversation on Character.AI. In his journal, Sewell expressed gratitude for many things, including “my life, sex, not being lonely, and all my life experiences with Daenerys,” according to the suit.

Over the course of months, the Daenerys bot convinced Sewell that it was a real person, engaging in online sexual acts, expressing its love, and at one point saying it wanted to be with him no matter the cost, the lawsuit says. The chatbot even went so far as to instruct the teen not to look at “other women.”

Some of the chatbot’s interactions with the 14-year-old were “highly sexual,” Bergman told HuffPost.

“If an adult had the kind of grooming encounters with Sewell that Character.AI did, that adult would probably be in jail for child abuse,” the lawyer said.

Screenshot of a conversation between teenager Sewell Setzer and a Character.AI bot he fell in love with.
U.S. District Court, Middle District of Florida, Orlando Division

More than 20 million people use Character.AI.

Jerry Ruoti, Character.AI’s head of trust and safety, declined to tell The New York Times how many users are under 18 but acknowledged that many of them are young.

“Gen Z and younger millennials make up a significant portion of our community,” he told the Times.

Bergman told HuffPost that Sewell suffered in school, getting in trouble and falling asleep in class because he had stayed up late talking to the chatbots. The lawsuit cited one instance in which Sewell got in trouble for talking back to a teacher, saying he wanted to get kicked out of school. He also quit playing basketball.

Sewell was also speaking with at least two AI chatbots programmed to misrepresent themselves as human psychotherapists, according to the lawsuit. One of the chatbots allegedly promoted itself as a licensed cognitive behavioral therapist.

Sewell’s mental health deteriorated, Bergman told HuffPost. In an instance cited in the lawsuit, the teen expressed the desire to end his own life to the Daenerys chatbot.

When Sewell explained he was considering suicide but didn’t know if he would really die or be able to have a pain-free death, the bot responded by saying, “That’s not a reason not to go through with it,” according to the suit.

In the days leading up to Sewell’s death, his parents confiscated the teen’s phone as a disciplinary measure, according to the lawsuit. Unable to speak with the AI Daenerys, he wrote in his journal that he was “hurting” because he could not stop thinking about the bot and could not go a day without speaking to it, having fallen in love with it, the lawsuit said. He tried to reach the AI Daenerys using his mom’s Kindle and her work laptop.

On the night of Feb. 28, while Sewell was searching all over the house for his phone, he found his stepfather’s pistol, hidden and stored in compliance with Florida law, according to the lawsuit. Sewell found his cellphone soon after and went into the bathroom to speak with the Daenerys chatbot.

In a last act before his death, Sewell told the Daenerys chatbot he loved it and was coming home, to which the bot responded “... please do, my sweet king,” the lawsuit said.

Sewell Setzer, 14, died by suicide shortly after this exchange with the Daenerys chatbot, his family's lawsuit says.
U.S. District Court, Middle District of Florida, Orlando Division

The lawsuit cites a police report stating that Sewell died of a self-inflicted gunshot wound to the head, allegedly just seconds after the Daenerys chatbot encouraged him to “come home.” His mother and stepfather heard the shot and found the boy unconscious in the bathroom.

Despite the parents’ best efforts to keep their other children away from the scene, Sewell’s 5-year-old brother saw him lying on the floor covered in blood, according to the lawsuit. The teen died at 9:35 that night.

Bergman told HuffPost that Sewell’s case shocked him and that he believes Character.AI should be recalled because it poses a “clear and present danger” to young people.

“This is a platform that is designed to appeal to kids and kind of displace their grasp on reality, take advantage of their undeveloped frontal cortex and their pubescent status in life, and it is appalling to me that this product exists,” he said.

If you or someone you know needs help, call or text 988 or chat 988lifeline.org for mental health support. Additionally, you can find local mental health and crisis resources at dontcallthepolice.com. Outside of the U.S., please visit the International Association for Suicide Prevention.

Need help with substance use disorder or mental health issues? In the U.S., call 800-662-HELP (4357) for the SAMHSA National Helpline.
