PANews reported on March 12 that, according to TIME, AI company Anthropic appeared on the cover of Time magazine after refusing to accept an "all legitimate uses" clause during contract negotiations with the Pentagon. Anthropic explicitly opposed the use of Claude for fully autonomous lethal weapons and for mass surveillance of U.S. citizens; the U.S. Department of Defense subsequently designated the company a supply-chain national security risk and barred it from military contracts. At the same time, citing the race to build more powerful models, Anthropic revised its Responsible Scaling Policy, dropping its hard commitment to pause training if safety could not be demonstrated and committing only to safety investments "on par with or higher" than its competitors'. The company has grown rapidly on the strength of products such as Claude Code, reaching a valuation of approximately $380 billion, and its models have been used for mission planning and intelligence analysis in operations including the capture of former Venezuelan President Maduro.
