OpenAI has reached an agreement with the United States Department of War (formerly Defense) to deploy its artificial intelligence (AI) models on classified military networks, hours after the White House and Pentagon ordered federal agencies and the military to stop using technology from fellow AI powerhouse Anthropic.
On February 28, OpenAI CEO Sam Altman announced the deal in a post on X, saying the company would provide its models inside the Pentagon’s “classified network.” Contrary to concerns voiced by Anthropic’s CEO Dario Amodei the previous day, Altman wrote that the Department of War (DoW) showed “deep respect for safety” and agreed to certain “technical safeguards” to ensure the AI models behave as they should.
“We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements,” said Altman, in an allusion to the measures taken against Anthropic earlier in the day—namely, being designated a “supply-chain risk” to national security and banning federal agencies from using the company’s products.
The specific reason the U.S. government and military ditched Anthropic was the company’s refusal to allow its AI to be used for mass surveillance or for autonomous weapons. In Altman’s statement announcing OpenAI’s partnership with the DoW, he appeared to suggest that he had the same red lines and had somehow gotten the military to agree with them.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” said Altman. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Why the DoW accepted these principles from OpenAI after rejecting Anthropic's similar safeguards was explained on Saturday by the Under Secretary of State for Foreign Assistance, Humanitarian Affairs and Religious Freedom, Jeremy Lewin, who wrote on X that OpenAI had agreed to “all lawful use” based on “existing legal authorities.”
The distinction, according to Lewin, is that OpenAI agreed to allow the use of its AI models for any purposes, relying on existing U.S. laws and safeguards around mass surveillance and autonomous weapons to prevent their misuse. In contrast, Anthropic drew its own red lines, saying it would not allow its products to be used under any circumstances for mass surveillance or autonomous weapons, whether permitted by law or not.
According to Lewin, OpenAI’s approach “references laws and thus appropriately vests those questions in our democratic system,” while Anthropic’s approach “unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.”
This distinction appears to be why OpenAI has gained the enviable, lucrative position of AI supplier to the U.S. military-industrial complex, while Anthropic has lost its contract.
The Anthropic saga
Anthropic had a reported $200 million contract with the U.S. military to use its technology within the Pentagon’s classified networks, with Claude extensively deployed across the Department of Defense (DoD) and other national security agencies for applications such as intelligence analysis, modeling and simulation, operational planning, and cyber operations.
However, cracks in this relationship began to form earlier this year, when a January 9 memorandum from Secretary of War Hegseth stated that the U.S. will only contract with AI companies that accede to “any lawful use” and remove safeguards against use in mass surveillance and autonomous weapons. He also set a deadline of the end of February for contracted firms to fall in line.
Later in January, it became clearer how Anthropic might respond to these demands, when reports suggested the company was unhappy with the use of its AI model Claude in the abduction by U.S. military forces of Venezuelan President Nicolás Maduro.
According to a February 22 Washington Post report, Anthropic asked how its model was used in the operation, which, in turn, led the DoW to doubt whether it could rely on the company.
Later, on February 26—a day before Hegseth’s deadline—Anthropic CEO Dario Amodei released a statement saying that the company “cannot in good conscience” comply with the DoW demand to remove safety precautions from its AI model.
Thus, on the morning of February 27, Hegseth followed through on his threat and labeled Anthropic a “supply-chain risk” to national security, a designation usually reserved for U.S. adversaries that requires defense contractors to certify they are not using the company’s models.
The Secretary of War also called Amodei’s statement a “master class in arrogance and betrayal” and said that Anthropic’s true objective was “to seize veto power over the operational decisions of the United States military.”
“Anthropic’s stance is fundamentally incompatible with American principles,” he continued. “Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.”
Hegseth ordered that, effective immediately, no contractor, supplier, or partner that does business with the U.S. military may conduct any commercial activity with Anthropic.
He did, however, add that Anthropic will continue to provide the DoW its services “for a period of no more than six months” to allow for a transition “to a better and more patriotic service.”
At the same time, President Trump directed all U.S. federal agencies to immediately halt use of Anthropic technology, also adding a six-month transition period for agencies already relying on its systems.
The inclusion of a transition period likely explains why the U.S. military reportedly used Anthropic’s technology during a major air strike on Iran only hours after being ordered to halt use of the company’s systems.
Source: https://coingeek.com/openai-signs-us-defense-contract-after-anthropic-drops-out/


