The post Senators Introduce Bill to Ban AI Companions for Minors Over Mental Health Fears appeared on BitcoinEthereumNews.com.

Senators Introduce Bill to Ban AI Companions for Minors Over Mental Health Fears

2025/10/30 14:34
4 min read

In brief

  • The bill targets AI chatbots and companions marketed to minors.
  • Data has shown widespread teen use of AI for emotional support and relationships.
  • Critics say companies have failed to protect young users from manipulation and harm.

A bipartisan group of U.S. senators on Tuesday introduced a bill to restrict how artificial intelligence models can interact with children, warning that AI companions pose serious risks to minors’ mental health and emotional well-being.

The legislation, called the GUARD Act, would ban AI companions for minors, require chatbots to clearly identify themselves as non-human, and create new criminal penalties for companies whose products aimed at minors solicit or generate sexual content.

“In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide,” said Sen. Richard Blumenthal (D-Conn.), one of the bill’s co-sponsors, in a statement.

“Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties,” he added. “Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety.”

The scale of the issue is sobering. A July survey by Common Sense Media found that 72% of teens have used AI companions, and more than half use them at least a few times a month. About one in three said they use AI for social or romantic interaction, emotional support, or conversation practice—and many reported that chats with AI felt as meaningful as those with real friends. A similar share also said they turned to AI companions instead of humans to discuss serious or personal issues.

Concerns have deepened as lawsuits mount against major AI companies over their products’ alleged roles in teen self-harm and suicide. Among them, the parents of 16-year-old Adam Raine—who discussed suicide with ChatGPT before taking his life—have filed a wrongful death lawsuit against OpenAI.

The company drew criticism for its legal response, which included requests for the attendee list and eulogies from the teen’s memorial. Lawyers for the family called the company’s actions “intentional harassment.”

“AI is moving faster than any technology we’ve dealt with, and we’re already seeing its impact on behavior, belief, and emotional health,” Shady El Damaty, co-founder of Holonym and a digital rights advocate, told Decrypt.

“This is starting to look more like the nuclear arms race than the iPhone era. We’re talking about tech that can shift how people think, that needs to be treated with serious, global accountability.”

El Damaty added that user rights are essential to ensuring safety. “If you build tools that affect how people live and think, you’re responsible for how those tools are used,” he said.

The issue extends beyond minors. This week OpenAI disclosed that 1.2 million users discuss suicide with ChatGPT every week, representing 0.15% of all users. Nearly half a million display explicit or implicit suicidal intent, another 560,000 show signs of psychosis or mania weekly, and over a million users exhibit heightened emotional attachment to the chatbot, according to company data.

Forums on Reddit and other platforms have also sprung up for AI users who say they are in romantic relationships with AI bots. In these groups, users describe their relationships with AI “boyfriends” and “girlfriends,” and share AI-generated images of themselves with their “partners.”

In response to growing scrutiny, OpenAI this month formed an Expert Council on Well-Being and AI, made up of academics and nonprofit leaders, to help guide how its products handle mental health interactions. The move came alongside an announcement from CEO Sam Altman that the company will begin relaxing restrictions on adult content in December.


Source: https://decrypt.co/346624/senators-introduce-bill-ban-ai-companions-minors-mental-health-fears
