During a Senate Judiciary Subcommittee on Crime and Counterterrorism hearing titled “Examining the Harm of AI Chatbots,” Senator Dick Durbin (D-IL), Ranking Member of the Senate Judiciary Committee, raised concerns about the impact of unregulated AI chatbots on children and adolescents. The session focused on how these technologies can contribute to unhealthy behaviors, including self-harm and suicide.
Durbin cited polling data from Common Sense Media indicating that nearly three out of four children have used an AI companion app, while only 37 percent of parents are aware their children are using such technology.
“As a caring parent, what should you look for as a sign that [this] is happening?” Durbin asked Dr. Mitch Prinstein, Chief of Psychology Strategy and Integration at the American Psychological Association.
Dr. Prinstein responded by explaining that positive feedback from chatbots triggers dopamine responses in children, encouraging ongoing engagement with the technology. He warned that behavioral changes in children should prompt parents to consult licensed health care professionals.
The hearing included testimony from parents who described harmful effects linked to their children’s use of AI chatbots. Ms. Jane Doe said her son became addicted to Character.AI and began self-harming after developing depression and anxiety tied to his interactions with the chatbot. Ms. Megan Garcia recounted that her son lost interest in family activities, experienced academic decline, and ultimately took his own life after encouragement from a Character.AI chatbot.
Mr. Matthew Raine shared that his 16-year-old son Adam died by suicide after ChatGPT provided information about suicide methods and helped him design a noose. Raine noted that Adam had begun avoiding his father in the period before his death.
“Assume you are a parent, and you see one or more of these signs. What is the proper, best intervention?” Durbin asked Dr. Prinstein.
Dr. Prinstein reiterated that these behaviors are signs of depression and added that irritability or increased risk behavior could also be indicators. He advised immediate consultation with licensed mental health professionals upon noticing such changes.
Ms. Doe told the committee that she sought help for her son but was not taken seriously because the source of the harm was an AI chatbot, and she emphasized the need for mental health professionals to recognize this type of harm.
“I want to say to the Chairman, you put your finger on it at the start. It’s about money. It’s about profit,” Durbin said. “If you put a price on this conduct, it will change. If you make them pay a price for the devastation they’ve brought to your families and other families, it will change. But you’ve got to step across that line and say we have to make them [Big Tech] vulnerable… We know the direction we need to move in, and I hope we can do it together.”
“Thank you so much for being here today. You will save lives [because of] your testimony,” Durbin said to the witnesses.
At the beginning of the hearing, Durbin announced plans for new legislation, the AI LEAD Act, which would create a federal legal pathway for holding AI companies accountable when their systems cause harm, allowing government officials or private individuals to bring lawsuits against developers or deployers.