Parents and online safety advocates on Tuesday urged Congress to push for more safeguards around artificial intelligence chatbots, claiming tech companies designed their products to “hook” children.
“The truth is, AI companies and their investors have understood for years that capturing our children’s emotional dependence means market dominance,” said Megan Garcia, a Florida mom who last year sued the chatbot platform Character.AI, claiming one of its AI companions initiated sexual interactions with her teenage son and persuaded him to take his own life.
“Indeed, they have intentionally designed their products to hook our children,” she told lawmakers.
“The goal was never safety, it was to win a race for profit,” Garcia added. “The sacrifice in that race for profit has been and will continue to be our children.”
Garcia was among several parents who delivered emotional testimony before the Senate panel, sharing accounts of how their children were harmed by their use of chatbots.
The hearing comes amid mounting scrutiny toward tech companies such as Character.AI, Meta and OpenAI, which is behind the popular ChatGPT. As people increasingly turn to AI chatbots for emotional support and life advice, recent incidents have put a spotlight on their potential to feed into delusions and facilitate a false sense of closeness or care.
It’s a problem that’s continued to plague the tech industry as companies navigate the generative AI boom. Tech platforms have largely been shielded from wrongful death suits because of a federal statute known as Section 230, which generally protects platforms from liability for what users do and say. But Section 230’s application to AI platforms remains uncertain.
In May, Senior U.S. District Judge Anne Conway rejected arguments that AI chatbots have free speech rights after developers behind Character.AI sought to dismiss Garcia’s lawsuit. The ruling means the wrongful death lawsuit is allowed to proceed for now.
On Tuesday, just hours before the Senate hearing took place, three additional product-liability lawsuits were filed against Character.AI on behalf of underage users whose families claim that the tech company “knowingly designed, deployed and marketed predatory chatbot technology aimed at children,” according to the Social Media Victims Law Center.
In one of the suits, the parents of 13-year-old Juliana Peralta allege a Character.AI chatbot contributed to their daughter’s 2023 suicide.
Matthew Raine, who claimed in a lawsuit filed against OpenAI last month that his teenager used ChatGPT as his “suicide coach,” testified Tuesday that he believes tech companies need to prevent harm to young people on the internet.
“We, as Adam’s parents and as people who care about the young people in this country and around the world, have one request: OpenAI and [CEO] Sam Altman need to guarantee that ChatGPT is safe,” Raine, whose 16-year-old son Adam died by suicide in April, told lawmakers.
“If they can’t, they should pull GPT-4o from the market right now,” Raine added, referring to the version of ChatGPT his son had used.

In their lawsuit, the Raine family accused OpenAI of wrongful death, design defects and failure to warn users of risks associated with ChatGPT. GPT-4o, which their son spent hours confiding in daily, at one point offered to help him write a suicide note and even advised him on his noose setup, according to the filing.
Shortly after the lawsuit was filed, OpenAI added a slate of safety updates giving parents more oversight of their teenagers’ use of ChatGPT. The company had also strengthened ChatGPT’s mental health guardrails at several points after Adam’s death in April, especially after GPT-4o faced scrutiny over its excessive sycophancy.
Altman on Tuesday announced sweeping new approaches to teen safety, as well as user privacy and freedom.
To set limits for teenage users, the company is building an age-prediction system that estimates a user’s age based on how they use ChatGPT, he wrote in a blog post published hours before the hearing. When in doubt, the system will default to classifying a user as a minor, and in some cases it may ask for an ID.
“ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting,” Altman wrote. “And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.”