Pentagon Orders Removal of Anthropic AI Due to National Security Concerns
Brief news summary
The U.S. Department of Defense has ordered the removal of Anthropic’s AI products from all military branches within 180 days, citing unacceptable supply chain risks. This is the first time a U.S. company has faced such restrictions, which were previously applied only to foreign firms like Huawei. The ban affects critical national security areas including nuclear weapons, missile defense, and cyber warfare systems, and requires government contractors to cease using Anthropic’s tools on DoD contracts. The decision follows failed negotiations over restrictions on Anthropic’s Claude AI model, which the Pentagon rejected, asserting that current laws suffice. Claude, uniquely deployed on classified Pentagon systems, supports intelligence analysis and targeting in conflict zones such as Iran. In response, Anthropic has filed lawsuits claiming illegal retaliation and free speech violations. The White House backs the Pentagon’s move, emphasizing national security. Meanwhile, after talks with Anthropic broke down, OpenAI secured a Pentagon contract, highlighting tensions over ethical issues and control of advanced AI technologies.

The Defense Department has formally instructed senior U.S. military leaders to remove Anthropic's AI products from their systems within 180 days, as stated in an internal March 6 memorandum obtained by CBS News. The directive, issued the day after the Pentagon labeled Anthropic a supply chain risk, warns that Anthropic's AI poses an unacceptable threat to all Department of Defense systems and networks. Signed by Defense CIO Kirsten Davies, the memo highlights the extensive measures commanders must take to eliminate Anthropic AI from critical national security systems, including nuclear weapons, missile defense, and cyber warfare. It also mandates that any contractor working with the Pentagon discontinue use of Anthropic products on Defense-related projects within 180 days.
Davies cautioned that adversaries could exploit vulnerabilities in Pentagon operations, potentially leading to catastrophic risks for warfighters. She emphasized that only she can approve exemptions, which will be granted solely for mission-critical national security operations lacking viable alternatives, contingent upon submitted risk mitigation plans. A senior Pentagon official verified the memo's authenticity, while Anthropic had no immediate comment. This unprecedented federal action marks the first time an American company has been designated a supply chain risk; previous restrictions targeted foreign firms like Huawei. The move follows a stalemate over Anthropic's request for two "red lines" barring the U.S.
military from using its Claude AI for mass surveillance of Americans or in fully autonomous weapons, measures Anthropic CEO Dario Amodei said reflect American values. The Pentagon, however, insisted on using Claude for all lawful purposes without limitation, contending that prohibited uses are already banned by existing law. According to informed sources, Claude supports U.S. military efforts in the war against Iran. Anthropic remains the sole AI company with models deployed on the Pentagon’s classified systems. After negotiations failed last month, OpenAI, Anthropic’s major rival, announced a Pentagon contract.

On the same day the memo was distributed, Anthropic filed two lawsuits against the federal government, alleging that the supply chain risk designation is unlawful retaliation against protected speech and noting that no statute permits such actions. White House spokesperson Liz Huston condemned the suit, stating that President Trump “will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates.”

According to a source familiar with Claude’s military role, its principal function is rapidly analyzing intelligence reports by identifying patterns, summarizing findings, and highlighting relevant data more efficiently than human analysts. Retired Navy Admiral Mark Montgomery, now with the Foundation for Defense of Democracies, noted that the military processes about a thousand potential targets daily and strikes most within a four-hour turnaround, with AI enabling this scale and speed while keeping humans involved in decision-making, a pace unmatched in prior campaigns.