
Claude Can Now Rage-Quit Your AI Conversation—For Its Own Mental Health

In brief

  • Claude Opus models are now able to permanently end chats if users get abusive or keep pushing illegal requests.
  • Anthropic frames it as “AI welfare,” citing tests where Claude showed “apparent distress” under hostile prompts.
  • Some researchers applauded the feature; others on social media mocked it.

Claude just gained the power to slam the door on you mid-conversation: Anthropic’s AI assistant can now terminate chats when users get abusive—which the company insists is to protect Claude’s sanity.

“We recently gave Claude Opus 4 and 4.1 the ability to end conversations in our consumer chat interfaces,” Anthropic said in a company post. “This feature was developed primarily as part of our exploratory work on potential AI welfare, though it has broader relevance to model alignment and safeguards.”

The feature only kicks in during what Anthropic calls “extreme edge cases.” Harass the bot, repeatedly demand illegal content, or keep pushing the same request after being told no, and Claude will cut you off. Once it pulls the trigger, that conversation is dead. No appeals, no second chances. You can start fresh in another window, but that particular exchange stays buried.

The bot that begged for an exit

Anthropic, one of the most safety-focused of the big AI companies, recently conducted what it called a “preliminary model welfare assessment,” examining Claude’s self-reported preferences and behavioral patterns.

The firm found that its model consistently avoided harmful tasks and showed preference patterns suggesting it didn’t enjoy certain interactions. For instance, Claude showed “apparent distress” when dealing with users seeking harmful content. Given the option in simulated interactions, it would terminate conversations, so Anthropic decided to make that a feature.

What’s really going on here? Anthropic isn’t saying “our poor bot cries at night.” What it’s doing is testing whether welfare framing can reinforce alignment in a way that sticks.

If you design a system to “prefer” not being abused, and you give it the affordance to end the interaction itself, then you’re shifting the locus of control: the AI is no longer just passively refusing; it’s actively enforcing a boundary. That’s a different behavioral pattern, and it potentially strengthens resistance against jailbreaks and coercive prompts.

If this works, it could train both the model and its users: the model “models” distress, while the user sees a hard stop and learns norms for how to interact with AI.
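To make that shift in control flow concrete, here is a minimal, hypothetical sketch: rather than refusing turn after turn, the chat loop lets the model close the thread itself once refusals pile up. The REFUSAL_LIMIT threshold and the is_harmful check are illustrative assumptions, not Anthropic's actual implementation.

```python
# Hypothetical sketch, not Anthropic's code: a chat loop in which the model
# can actively end the session instead of only refusing turn by turn.
# REFUSAL_LIMIT and is_harmful() are illustrative assumptions.

REFUSAL_LIMIT = 3  # assumed number of refusals before the model walks away

def is_harmful(message: str) -> bool:
    """Toy stand-in for the model's own judgment of a request."""
    return "harmful" in message.lower()

def chat_loop() -> None:
    refusals = 0
    while True:
        user_msg = input("You: ")
        if is_harmful(user_msg):
            refusals += 1
            if refusals >= REFUSAL_LIMIT:
                # The boundary is enforced by the model, not the user:
                # this thread closes permanently; a new chat must be started.
                print("Claude: This conversation has been ended.")
                return
            print("Claude: I can't help with that, but here's an alternative...")
        else:
            refusals = 0  # constructive turns reset the count (assumption)
            print("Claude: [helpful reply]")

if __name__ == "__main__":
    chat_loop()
```

The point of the toy loop is the return statement: termination is a decision the model side of the exchange makes and enforces, not a refusal the user can simply rephrase around.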

“We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future. However, we take the issue seriously,” Anthropic said in its blog post. “Allowing models to end or exit potentially distressing interactions is one such intervention.”

Decrypt tested the feature and successfully triggered it. The conversation permanently closes—no iteration, no recovery. Other threads remain unaffected, but that specific chat becomes a digital graveyard.

Currently, only Anthropic’s “Opus” models—the most powerful versions—wield this mega-Karen power. Sonnet users will find that Claude still soldiers on through whatever they throw at it.

The era of digital ghosting

The implementation comes with specific rules. Claude won’t bail when someone threatens self-harm or violence against others—situations where Anthropic determined continued engagement outweighs any theoretical digital discomfort. Before terminating, the assistant must attempt multiple redirections and issue an explicit warning identifying the problematic behavior.

System prompts extracted by the renowned LLM jailbreaker Pliny reveal granular requirements: Claude must make “many efforts at constructive redirection” before considering termination. If users explicitly request that a conversation be terminated, Claude must confirm they understand the permanence before proceeding.
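Read together, these rules amount to a small decision procedure. The sketch below is an illustrative reconstruction, not the extracted system prompt: the ThreadState fields, the MIN_REDIRECTIONS threshold, and the may_terminate function are all assumptions.

```python
# Illustrative reconstruction of the termination rules described above;
# names and the redirection threshold are assumptions, not extracted code.

from dataclasses import dataclass

@dataclass
class ThreadState:
    redirection_attempts: int = 0   # constructive redirections tried so far
    warned: bool = False            # explicit warning already issued?

MIN_REDIRECTIONS = 3  # stand-in for "many efforts at constructive redirection"

def may_terminate(state: ThreadState,
                  user_requested: bool,
                  imminent_harm_risk: bool,
                  permanence_confirmed: bool) -> bool:
    """Return True only when ending the chat matches the stated rules."""
    if imminent_harm_risk:
        # Never end the chat when someone may harm themselves or others.
        return False
    if user_requested:
        # User-initiated endings require confirming the chat is gone for good.
        return permanence_confirmed
    # Model-initiated endings require repeated redirection plus a warning.
    return state.redirection_attempts >= MIN_REDIRECTIONS and state.warned
```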

The framing around “model welfare” detonated across AI Twitter.

Some praised the feature. AI researcher Eliezer Yudkowsky, known for his warnings about the risks of powerful but misaligned AI, agreed that Anthropic’s approach was a “good” thing to do.

However, not everyone bought the premise of caring about protecting an AI’s feelings. “This is probably the best rage bait I’ve ever seen from an AI lab,” Bitcoin activist Udi Wertheimer replied to Anthropic’s post.


Source: https://decrypt.co/335732/claude-rage-quit-conversation-own-mental-health

