
US Pentagon chief orders Anthropic retaliation designation and lays out the ban

2026/02/28 13:20
4 min read

Anthropic is now tagged as a Supply-Chain Risk to National Security by the Department of War, according to U.S. Defense Secretary Pete Hegseth, who posted a long statement on X targeting the AI company.

Pete said his department is permanently breaking up with Anthropic, adhering to President Donald Trump’s public demand that all federal government agencies stop using Anthropic’s tech “immediately.”

As Cryptopolitan previously reported, Anthropic wanted two limits on how its AI gets used, saying no fully autonomous weapons and no mass domestic surveillance of Americans.


Pete wrote in his X post that the Department of War must simply “have full, unrestricted access” to Anthropic models for “every LAWFUL purpose.”

He also attacked Dario Amodei, Anthropic’s CEO, and said the company used “effective altruism” language while trying to force the military’s hand.

Pete then said that the company’s “true objective” was “to seize veto power over the operational decisions of the United States military.”

The US defense chief then wrote that Anthropic is “fundamentally incompatible with American principles,” and said its relationship with the U.S. Armed Forces and the federal government had been “permanently altered.”


Pete also added a transition window, saying that Anthropic will keep providing services to the Department of War “for a period of no more than six months” so the Pentagon can switch to something else. He ended with, “This decision is final.”

The deadline passes after the $200 million deal

Anthropic had signed a $200 million contract with the Pentagon in July. After that deal, Anthropic wanted written assurances that its models would not be used in fully autonomous weapons or mass domestic surveillance of Americans.

Reporting on the talks says the Pentagon “strongly resisted” that request. Then the Pentagon set a deadline: 5:01 p.m. ET Friday. The demand was that Anthropic agree that the U.S. military can use the tech for “all lawful purposes.” Obviously, that deadline passed without an agreement.

The Pentagon’s contractor web includes every kind of company: every operating system vendor, every hardware maker, every hyperscaler, and every supplier in the chain.

The Trump administration’s actions are a twisted power grab over its inability to commit war crimes and stalk its own citizens.

Anthropic responds to Pentagon, cites 10 USC 3252, and talks court

Anthropic responded with its own statement. The company said it had not received direct communication from the Department of War or the White House on the status of negotiations. It said, “We have tried in good faith to reach an agreement,” and said it supports lawful uses for national security.

On the label itself, Anthropic called the designation “unprecedented,” and said it is usually reserved for U.S. adversaries and has never been publicly applied to an American company. It said, “We are deeply saddened by these developments.”

Anthropic also pointed to its past work with the military. It said it was the first frontier AI company to deploy models in U.S. government classified networks, that it has supported American warfighters since June 2024, and that it intends to keep doing so.

The company then said the designation would be “legally unsound” and would set a “dangerous precedent” for any American company that negotiates with the government. It said:

Anthropic then said Pete implied the label would stop anyone who does business with the military from doing business with Anthropic, and it said Pete “does not have the statutory authority” to back that up.

It cited 10 USC 3252 and said a supply chain risk designation can only extend to the use of Claude as part of Department of War contracts, but cannot control how contractors use Claude for other customers.

The company has promised that individual customers and commercial contract customers are unaffected, including access to Claude through the API, claude.ai, and other products. It said Department of War contractors would only be restricted on Department of War contract work, if the designation is formally adopted, and use for any other purpose would be unaffected.

Meanwhile, Big Tech companies Nvidia, Amazon, and Google would likely have to divest from Anthropic if Pete gets his way, which would also make it nearly impossible to recommend investing in American AI to any investor, or starting an AI company in the United States. This is essentially a lose-lose.

