Senators Introduce Bill to Ban AI Companions for Minors Over Mental Health Fears

In brief

  • The bill targets AI chatbots and companions marketed to minors.
  • Data has shown widespread teen use of AI for emotional support and relationships.
  • Critics say companies have failed to protect young users from manipulation and harm.

A bipartisan group of U.S. senators on Tuesday introduced a bill to restrict how artificial intelligence models can interact with children, warning that AI companions pose serious risks to minors’ mental health and emotional well-being.

The legislation, called the GUARD Act, would ban AI companions for minors, require chatbots to clearly identify themselves as non-human, and create new criminal penalties for companies whose products aimed at minors solicit or generate sexual content.

“In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide,” said Sen. Richard Blumenthal (D-Conn.), one of the bill’s co-sponsors, in a statement.

“Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties,” he added. “Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety.”

The scale of the issue is sobering. A July survey by Common Sense Media found that 72% of teens have used AI companions, and more than half use them at least a few times a month. About one in three said they use AI for social or romantic interaction, emotional support, or conversation practice—and many reported that chats with AI felt as meaningful as those with real friends. A similar number said they had turned to AI companions instead of humans to discuss serious or personal issues.

Concerns have deepened as lawsuits mount against major AI companies over their products’ alleged roles in teen self-harm and suicide. Among them, the parents of 16-year-old Adam Raine—who discussed suicide with ChatGPT before taking his life—have filed a wrongful death lawsuit against OpenAI.

The company drew criticism for its legal response, which included requests for the attendee list and eulogies from the teen’s memorial. Lawyers for the family called the company’s actions “intentional harassment.”

“AI is moving faster than any technology we’ve dealt with, and we’re already seeing its impact on behavior, belief, and emotional health,” Shady El Damaty, co-founder of Holonym and a digital rights advocate, told Decrypt.

“This is starting to look more like the nuclear arms race than the iPhone era. We’re talking about tech that can shift how people think, that needs to be treated with serious, global accountability.”

El Damaty added that user rights are essential to ensuring safety. “If you build tools that affect how people live and think, you’re responsible for how those tools are used,” he said.

The issue extends beyond minors. This week OpenAI disclosed that 1.2 million users discuss suicide with ChatGPT every week, roughly 0.15% of its weekly users. Nearly half a million display explicit or implicit suicidal intent, another 560,000 show signs of psychosis or mania weekly, and over a million users exhibit heightened emotional attachment to the chatbot, according to company data.

Forums on Reddit and other platforms have also sprung up for AI users who say they are in romantic relationships with AI bots. In these groups, users describe their relationships with AI “boyfriends” and “girlfriends,” and share AI-generated images of themselves and their “partners.”

In response to growing scrutiny, OpenAI this month formed an Expert Council on Well-Being and AI, made up of academics and nonprofit leaders, to help guide how its products handle mental health interactions. The move came alongside an announcement from CEO Sam Altman that the company will begin relaxing restrictions on adult content in December.

Source: https://decrypt.co/346624/senators-introduce-bill-ban-ai-companions-minors-mental-health-fears
