Anthropic, OpenAI Dial Back Safety Language as AI Race Accelerates

In brief

  • TIME reports Anthropic dropped a pledge to halt training without guaranteed safeguards.
  • OpenAI also removed “safely” from its mission after restructuring into a for-profit entity.
  • Experts say the shift reflects political, economic, and intellectual changes.

Anthropic has dropped a central safety pledge from its Responsible Scaling Policy, according to a report by TIME. The changes loosen a commitment that once barred the Claude AI developer from training advanced AI systems without guaranteed safeguards in place.

The move reshapes how the company positions itself in the AI race against rivals OpenAI, Google, and xAI. Anthropic has long cast itself as one of the industry’s most safety-focused labs, but under the revised policy, the company no longer promises to halt training if risk mitigations are not fully in place.

“We felt that it wouldn’t actually help anyone for us to stop training AI models,” Anthropic’s chief science officer, Jared Kaplan, told TIME. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

The change comes as Anthropic finds itself embroiled in a public dispute with U.S. Defense Secretary Pete Hegseth over its refusal to grant the Pentagon full access to Claude, a stance no other major AI lab, including Google, xAI, Meta, and OpenAI, has taken.

Edward Geist, a senior policy researcher at the RAND Corporation, said the earlier “AI safety” framing emerged from a specific intellectual community that predated today’s large language models.

“As of a few years ago, there was the field of AI safety,” Geist told Decrypt. “AI safety was associated with a particular set of views that came out of the community of people who cared about powerful AI before we had these LLMs.”

Geist said early AI safety advocates were working from a very different vision of what advanced artificial intelligence would look like.

“They ended up conceptualizing the problem in a way that, in some respects, was envisioning something qualitatively different from these current LLMs, for better or worse,” Geist said.

Geist said the language change also sends a signal to investors and policymakers.

“Part of it is signaling to various constituencies that a lot of these companies want to give the impression that they are not holding back in the economic competition because of concerns about ‘AI safety,’” he said, adding that the terminology itself is changing to fit the times.

Anthropic is not alone in revising its safety language.

What defines AI safety?

A recent report by the nonprofit news organization The Conversation noted that OpenAI also changed its mission statement in its 2024 IRS filing, removing the word “safely.”

The company’s earlier statement pledged to build general-purpose AI that “safely benefits humanity, unconstrained by a need to generate financial return.” The updated version now states its goal is “to ensure that artificial general intelligence benefits all of humanity.”

“The problem with the term AI security is that no one seems to know what that means exactly,” Geist said. “Then again, the AI safety term was also contested.”

Anthropic’s new policy emphasizes transparency measures such as publishing “frontier safety roadmaps” and regular “risk reports,” and says it will delay development if it believes there is a significant risk of catastrophe.

Anthropic and OpenAI’s policy shifts come as the companies look to strengthen their commercial position.

Earlier this month, Anthropic said it raised $30 billion at a valuation of about $380 billion. At the same time, OpenAI is finalizing a funding round backed by Amazon, Microsoft, and Nvidia that could reach $100 billion.

Anthropic and OpenAI, along with Google and xAI, have been awarded lucrative government contracts with the U.S. Department of Defense. For Anthropic, however, the contract appears in doubt as the Pentagon weighs whether to cut ties with the AI firm over the access dispute.

As capital pours into the sector and geopolitical competition intensifies, Hamza Chaudhry, AI and National Security Lead at the Future of Life Institute, said the policy change reflects shifting political dynamics rather than a bid for Pentagon business.

“If that were the case, they would have just backed down from what the Pentagon said a week ago,” Chaudhry told Decrypt. “Dario [Amodei] wouldn’t have shown up to meet.”

Instead, Chaudhry said the rewrite reflects a turning point in how AI companies talk about risk as political pressure and competitive stakes rise.

“Anthropic is now saying, ‘Look, we can’t keep saying safety, we can’t unconditionally pause, and we’re going to push for much lighter-touch regulation,’” he said.

Source: https://decrypt.co/359179/anthropic-openai-dial-back-safety-language-ai-race
