US Military Used Anthropic’s Claude AI in Iran Strike Despite Trump’s Ban
Brief news summary
The US military reportedly used Anthropic’s AI model, Claude, in a joint US-Israel mission targeting Iran, despite President Donald Trump’s directive to sever federal ties with the company. Claude played a key role in intelligence gathering, target selection, and battlefield simulations, according to reports by The Wall Street Journal and Axios. Trump had branded Anthropic a “Radical Left AI company” and banned federal use of its technology over ideological differences. The dispute escalated after Claude was allegedly employed in a raid to capture Venezuelan President Nicolás Maduro, in violation of Anthropic’s policies prohibiting the use of its AI for acts of violence or surveillance. The breach strained relations between Trump, the Pentagon, and Anthropic, with Defense Secretary Pete Hegseth condemning the company’s “arrogance and betrayal” and demanding full access to Claude, while allowing up to six months for a transition given the model’s deep integration. Meanwhile, OpenAI CEO Sam Altman disclosed a Pentagon agreement to integrate AI tools such as ChatGPT into classified military networks, underscoring the growing reliance on AI in defense operations despite ongoing controversies.

The US military reportedly used Claude, Anthropic’s AI model, to guide its attack on Iran despite Donald Trump’s decision, announced just hours earlier, to cut all ties with the company and its AI tools. The Wall Street Journal and Axios reported that Claude was employed during the extensive joint US-Israel strike on Iran that began on Saturday. This highlights how difficult it is for the US military to withdraw advanced AI tools from its operations once the technology has become deeply integrated into mission activities. According to the Journal, US military leadership used Claude for intelligence purposes, target selection, and battlefield simulations.
On Friday, only hours before the Iran attack started, Trump ordered all federal agencies to immediately cease use of Claude. He condemned Anthropic on Truth Social as a “Radical Left AI company run by people who have no idea what the real World is all about.” The dispute had been ignited by the US military’s use of Claude during its January raid aimed at capturing Venezuelan president Nicolás Maduro. Anthropic protested, citing its terms of use, which prohibit Claude’s application for violent purposes, weapons development, or surveillance. Since that incident, relations between Trump, the Pentagon, and Anthropic have steadily deteriorated.
In a detailed post on X on Friday, Defense Secretary Pete Hegseth accused Anthropic of “arrogance and betrayal,” adding that “America’s warfighters will never be held hostage by the ideological whims of Big Tech.” Hegseth demanded full and unrestricted access to all of Anthropic’s AI models for any lawful uses. However, he also acknowledged the difficulty of rapidly detaching military systems from the AI tool, given its extensive adoption. He noted that Anthropic would continue providing services “for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.” Following the split with Anthropic, rival company OpenAI has moved to fill the gap. Sam Altman, OpenAI’s CEO, said he had reached an agreement with the Pentagon to deploy the company’s tools, including ChatGPT, within its classified network.