What happened
In July 2025, the Pentagon's Chief Digital and Artificial Intelligence Office awarded multi-vendor prototype agreements to companies including Anthropic, OpenAI, Google, and xAI for AI services on defense networks.
Anthropic's agreement included restrictions it had negotiated since signing: no use for mass surveillance of people in the United States and no use for fully autonomous weapons systems. In January 2026, the Department of Defense ordered Anthropic to remove those restrictions and provide unrestricted use. Anthropic refused.
On February 27, President Trump ordered all federal agencies to cease use of Anthropic's technology, with a six-month phase-out. Defense Secretary Hegseth simultaneously designated Anthropic a "supply-chain risk to national security" — a label historically reserved for foreign adversary technology.
Shortly after the designation, OpenAI announced a contract to deploy its models on classified DoD networks — the same networks where Anthropic's models had previously established a significant foothold.
The contract language matters
OpenAI's published contract included three stated restrictions: no mass domestic surveillance, no autonomous weapons direction, and no high-stakes automated decisions like social credit systems. It also accepted the DoD's "all lawful purposes" standard — the same standard Anthropic had rejected.
The distinction, as Lawfare's analysis noted, is about interpretive authority. Under "all lawful purposes," the customer defines what is lawful. Under Anthropic's approach, the vendor retained veto power over specific use categories.
Anthropic CEO Dario Amodei, in a leaked staff memo, was specific about the gap: the Pentagon asked Anthropic to delete "a specific phrase about 'analysis of bulk acquired data'" — the single contractual line that addressed the surveillance scenario Anthropic considered most important to restrict.
What this reveals about policy durability
This episode exposed a structural reality that affects every organization evaluating AI vendors for sensitive work:
Vendor policy isn't durable governance. A vendor's published acceptable-use policy can change under procurement pressure, executive order, or commercial incentive. Anthropic held its position and was designated a supply-chain risk. OpenAI revised its contract terms after public backlash — adding language about "commercially acquired personal or identifiable information" that wasn't in the original deal. Both policy positions shifted in response to external pressure within weeks.
Contract language has become governance by negotiation. The Center for American Progress observed that the U.S. has moved toward AI governance by procurement contract — bilateral agreements between vendors and agencies, without statutory backing, democratic accountability, or institutional durability. The rules governing military AI use are negotiated in procurement offices, not legislated.
The market responded faster than the policy process. Hundreds of OpenAI and Google employees signed an open letter supporting Anthropic's position. A user boycott helped push Claude to the top of the App Store. OpenAI revised its contract language. The policy outcome was shaped by market dynamics and public pressure as much as by any formal governance process.
Implications for organizations in sensitive markets
For defense contractors and government-adjacent teams evaluating AI vendors, three implications stand out:
Vendor policy is a snapshot. Evaluate vendors on architectural properties — what data paths exist, what telemetry is collected, what the system can and can't do by design — not on policy language that may change. This is the core reason to understand how architecture defines surveillance boundaries, independently of vendor intent.
Supply-chain risk designations are a real business risk. Government contractors using Anthropic products face a six-month compliance deadline to remove them. Any team that built workflows around a single vendor's AI capability now faces forced migration under time pressure.
Multi-vendor dependency is itself a risk. The rapid substitution — Anthropic out, OpenAI in, on classified networks — happened in a single news cycle. Treating AI vendors as interchangeable underestimates the effort of switching. Treating any single vendor as permanent risks getting caught in the next policy shift.
The structural question raised by both the Center for American Progress and Lawfare is whether procurement contracts are the right mechanism for governing military AI use at all. Until Congress acts, the answer defaults to the current system: bilateral negotiation between vendors and agencies, subject to revision under pressure.