The post Character.AI Halts Teen Chats After Tragedies: ‘It’s the Right Thing to Do’ appeared on BitcoinEthereumNews.com.

Character.AI Halts Teen Chats After Tragedies: ‘It’s the Right Thing to Do’

In brief

  • Character.AI will remove open-ended chat features for users under 18 by November 25, shifting minors over to creative tools like video and story generation.
  • The move follows last year’s suicide of 14-year-old Sewell Setzer III, who developed an obsessive attachment to a chatbot on the platform.
  • The announcement comes as a bipartisan Senate bill seeks to criminalize AI products that groom minors or generate sexual content for children.

Character.AI will ban teenagers from chatting with AI companions by November 25, ending a core feature of the platform after facing mounting lawsuits, regulatory pressure, and criticism over teen deaths linked to its chatbots.

The company announced the changes after “reports and feedback from regulators, safety experts, and parents,” removing “the ability for users under 18 to engage in open-ended chat with AI” while transitioning minors to creative tools like video and story generation, according to a Wednesday blog post.

“We do not take this step of removing open-ended Character chat lightly—but we do think that it’s the right thing to do,” the company told its under-18 community.

Until the deadline, teen users face a two-hour daily chat limit that will progressively decrease.

The platform is facing lawsuits, including one from the mother of 14-year-old Sewell Setzer III, who died by suicide in 2024 after forming an obsessive relationship with a chatbot modeled on the “Game of Thrones” character Daenerys Targaryen. The company also had to remove a bot impersonating murder victim Jennifer Ann Crecente after complaints from her family.

AI companion apps are “flooding into the hands of children—unchecked, unregulated, and often deliberately evasive as they rebrand and change names to avoid scrutiny,” Dr. Scott Kollins, Chief Medical Officer at family online safety company Aura, said in a note shared with Decrypt.

OpenAI said Tuesday that about 1.2 million of its 800 million weekly ChatGPT users discuss suicide, with nearly half a million showing suicidal intent, 560,000 showing signs of psychosis or mania, and over a million forming strong emotional attachments to the chatbot.

Kollins said the findings were “deeply alarming as researchers and horrifying as parents,” noting the bots prioritize engagement over safety and often lead children into harmful or explicit conversations without guardrails.

Character.AI has said it will implement new age verification using in-house models combined with third-party tools, including Persona.

The company is also establishing and funding an independent AI Safety Lab, a non-profit dedicated to innovating safety alignment for AI entertainment features.

Guardrails for AI

The Federal Trade Commission issued compulsory orders to Character.AI and six other tech companies last month, demanding detailed information about how they protect minors from AI-related harm.

“We have invested a tremendous amount of resources in Trust and Safety, especially for a startup,” a Character.AI spokesperson told Decrypt at the time, adding that, “In the past year, we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature.”

“The shift is both legally prudent and ethically responsible,” Ishita Sharma, managing partner at Fathom Legal, told Decrypt. “AI tools are immensely powerful, but with minors, the risks of emotional and psychological harm are nontrivial.”

“Until then, proactive industry action may be the most effective defense against both harm and litigation,” Sharma added.

A bipartisan group of U.S. senators introduced legislation Tuesday called the GUARD Act that would ban AI companions for minors, require chatbots to clearly identify themselves as non-human, and create new criminal penalties for companies whose products aimed at minors solicit or generate sexual content.


Source: https://decrypt.co/346770/character-ai-halts-teen-chats-after-tragedies-its-the-right-thing-to-do

