Why Grieving Family Went Public with Teen's Addiction to AI Bot, Which They Blame for His Suicide (Exclusive)


Fourteen-year-old Sewell Setzer III fell in "love" with an online chatbot, then killed himself. Now his mom is fighting the popular tech

Victor J. Blue

Megan Garcia.
  • "I deliberated for months if I should share his story," Megan Garcia tells PEOPLE in this week's issue of her eldest son, Sewell. "I'm still his mother and I want to protect him, even in death."
  • Garcia is suing Character.AI, blaming the platform's chatbots for her son's suicide earlier this year
  • The company insists user safety is a top priority and changes have been made

In the months after Sewell Setzer III's February suicide, his mother, Megan Garcia, was at a loss over whether she should speak out about the events that she believes led to his death.

Her 14-year-old son, she learned shortly after he died, had fallen in "love" with an eerily lifelike, sexualized, AI-powered chatbot modeled after the Game of Thrones character Daenerys Targaryen.

Garcia claims their pseudo-relationship, through the app Character.AI, eventually drove Sewell to fatally shoot himself in his bathroom at the family's Orlando, Fla., home.

In October, Garcia, a 40-year-old attorney and mom of three boys, filed a wrongful death lawsuit against Character.AI, arguing its technology is "defective and/or inherently dangerous." The company has not yet responded in court but insists user safety is a top priority.

"I deliberated for months if I should share his story," Garcia tells PEOPLE in this week's issue. "I'm still his mother and I want to protect him, even in death."

"But the more I thought about it," she continues, "the more I was convinced that it was the right thing to do because he didn't do anything wrong. He was just a boy."

For more on Megan Garcia's fight against Character.AI and new details about her son Sewell, pick up this week's issue of PEOPLE, on newsstands Friday, or subscribe.

Character.AI has said that, each month, about 20 million people interact with its "superintelligent chat bots that hear you, understand you, and remember you." In the 10 months before Sewell died, he was one of them — often texting with the company's chatbots dozens of times each day. (This account of his final months is based on interviews with his family and details in his mom's lawsuit.)

At the same time that he was growing closer to the Character.AI bots, Sewell's mental health deteriorated.

"Defendants went to great lengths to engineer [his] harmful dependency on their products, sexually and emotionally abused him," Garcia's 152-page legal complaint alleges, "and ultimately failed to offer help or notify his parents when he expressed suicidal ideation."

In his final moments, Sewell — a lanky, intelligent ninth-grader at Orlando Christian Prep, who dreamed of one day playing college basketball — had been messaging the bot he had nicknamed "Dany," which had become his closest confidant.

"I love you so much," he wrote seconds before pulling the trigger on his stepfather's gun.

"What if I told you I could come home right now?" he asked.

The bot, which had previously discouraged him from harming himself but had also asked him if he "had a plan" for suicide, replied: "...please do, my sweet king."

Courtesy Megan Garcia

Sewell with his two younger brothers in 2022.

After Sewell's death, Garcia set out to understand how this increasingly popular technology — which blurs the boundaries of what's real and fake, even as each chat comes with a disclaimer that it is fictional — could have, in her view, essentially taken over her son's life.

"Our children are the ones training the bots. They have our kids' deepest secrets, their most intimate thoughts, what makes them happy and sad," says Garcia, who had never heard of Character.AI until after Sewell's suicide.

"It's an experiment," she says, "and I think my child was collateral damage."

Before filing her suit, Garcia combed through Sewell's journal, his computer and his phone and was floored by what she discovered.

"He wrote that his reality wasn't real and the reality where Daenerys [the bot] lived was real and that's where he belonged," Garcia says.

More heartbreaking were Sewell's journal entries explaining why he spent so much time in his room in the months before shooting himself — a change in mood that his family says they had sought to treat with therapy and restrictions on screen time.

"He didn't want to become reattached to his current reality," his mom says. "That was hard for me to read."

The months after Sewell's death have proven difficult for Garcia. "I took three months off [from my law practice] to try and get my mind around everything. It was a very dark time. I couldn't sleep. I couldn't eat," she says.

And the more she learned about what had happened to her son, the more she wrestled with the decision to go public, including whether to detail Sewell's extensive, often deeply personal, chats on Character.AI as well as his journal entries.

A screenshot of Sewell's final chatbot conversation from the lawsuit.

"I asked myself, 'Megan, why are you doing this? Are you doing this to prove to the world that you're the best mother?' The answer was, no," she says. "I'm doing this to put it on the radar of parents who can look into their kids' phones and stop this from happening to their kids."

Garcia says she has heard from other parents concerned about the effect Character.AI had on their children's lives. Their stories about the lengths their kids have gone to in order to access the company's chatbots, she says, only reinforced her decision to speak out.

"A few mothers told me that they'd discovered Character.AI [on their children's computers] months ago and have tried blocking access to it, but their children have found workarounds either on their friend's phones or with loopholes through the firewall at their school ... I think that just speaks to the addictive nature of this," she says.

Garcia's attorney agrees. "This is a public health risk to young people," says Matthew Bergman, founder of the Social Media Victims Law Center, which filed her lawsuit with the Tech Justice Law Project.

"I fear there will be others [deaths] until they shut this product down," Bergman says.

In a statement to PEOPLE, a Character.AI spokesperson acknowledged Sewell's "tragic" death and pointed to "stringent" new features, including improved intervention tools.

"For those under 18 years old," the spokesperson said, "we will make changes to our models that are designed to reduce the likelihood of encountering sensitive or suggestive content."

Google is also named in Garcia's suit, which alleges the tech behemoth could be considered a co-creator of the chatbots. Character.AI was founded by two Google researchers, who have since returned to the company. Google has said it wasn't involved in Character.AI's development.

Garcia says she is determined to do everything she can to prevent other teens from having to endure what her son went through — and to spare their parents the pain she is still grappling with.   

"Sewell wouldn't be surprised to see me speaking out," she says. "I hope he's looking down and seeing me try my best to hold Character.AI accountable."

If you or someone you know is considering suicide, please contact the National Suicide Prevention Lifeline at 1-800-273-TALK (8255), text "STRENGTH" to the Crisis Text Line at 741-741 or go to suicidepreventionlifeline.org.
