Anthropic vs Pentagon: First Amendment Fight Over AI Surveillance and Supply Chain Risk Designation
Brief news summary
The conflict between AI company Anthropic and the Pentagon began when Anthropic refused to allow its technology to be used for domestic surveillance. In retaliation, the Department of Defense labeled Anthropic a “supply chain risk,” prompting the company to seek a court injunction against the designation. Anthropic contends that being forced to modify its AI code to enable government surveillance violates its First Amendment rights, because AI development involves expressive choices. Public interest groups support Anthropic, viewing the DoD’s label as retaliation for the company’s refusal and for its CEO’s public warnings about AI-driven surveillance risks. Given the US government’s history of mass data collection, AI-enabled surveillance raises serious privacy concerns: it can deanonymize users and chill dissent. With legal protections insufficient, companies like Anthropic bear the responsibility of safeguarding privacy on their own. Without congressional reform, protecting privacy will increasingly depend on Big Tech’s willingness to resist government pressure and defend user rights.

The rapidly escalating dispute between Anthropic and the Pentagon, which began when the company refused to allow the government to use its technology to spy on Americans, has now moved to the courts. In retaliation, the Department of Defense designated Anthropic a “supply chain risk” (SCR). Anthropic is now seeking a court order to block the designation, arguing that the First Amendment prohibits the government from forcing a private company to alter its code to serve government purposes. We concur. As the Electronic Frontier Foundation (EFF), the Foundation for Individual Rights and Expression, and several other public interest groups explain in a brief supporting Anthropic’s motion, developing and operating large language models involves numerous expressive choices safeguarded by the First Amendment.
Forcing a company to rewrite its code to remove protections effectively compels different speech, a clear constitutional violation. Moreover, public evidence suggests the SCR designation is meant to punish Anthropic both for resisting government demands and for its CEO’s public comments on how AI could amplify surveillance practices that existing laws fail to adequately regulate. The company’s concerns about government use of its technology are well founded: historically, the U.S. government has surveilled its citizens unlawfully and without proper judicial oversight, often relying on dubious interpretations of its constitutional and statutory authority.
The Department of Defense collects vast amounts of personal data from commercial sources, including individuals’ location information, social media activity, and web browsing habits. Other agencies likewise amass and query extensive data on Americans, often purchasing it from third-party data brokers. Extensive social science research has documented the chilling effects of such widespread surveillance: fearing retaliation for unpopular opinions, people stay silent.

AI intensifies the problem. It can rapidly analyze massive government datasets and combine them with information gathered from the internet, commercial data brokers, or local police surveillance tools, building detailed profiles that reveal sensitive matters such as religious beliefs, medical conditions, political views, or sexual relationships. An agency might, for instance, infer someone’s connection to a mosque from their website visits, social media follows, and physical presence near the mosque during services. AI can also deanonymize online speech by matching it against public information to reveal anonymous users’ identities. It is easy to imagine government agencies, malicious employees, or hackers misusing these capabilities to monitor public speech, suppress dissent preemptively, or target marginalized groups.

Given this context, and absent significant reform of national security laws and judicial oversight, it is entirely reasonable for Anthropic, or any company, to maintain its own protective guardrails. In the absence of congressional action, the responsibility for safeguarding privacy has largely fallen to Big Tech, a position no one wants, including those companies themselves. Yet if Congress fails to act, companies like Anthropic must be permitted to step in without facing punitive consequences.