U.S. Government Orders Phase-Out of Anthropic AI Tools Following Pentagon Dispute
The United States government has reportedly begun phasing out the use of AI products developed by Anthropic after a public disagreement between the company and the Department of Defense.
According to statements posted on Truth Social, President Donald Trump instructed federal agencies to discontinue use of Anthropic’s technology, granting departments a six-month transition period. The directive indicated that the company would no longer be considered for federal contracting.
Although the initial announcement did not reference national security classification measures, a later statement from Secretary of Defense Pete Hegseth reportedly designated Anthropic as a “supply chain risk to national security.” The order stated that contractors and suppliers working with the U.S. military would be prohibited from engaging in commercial activity with the company.
Background of the Dispute
The disagreement reportedly stemmed from Anthropic’s refusal to allow its artificial intelligence models to be used for:
- Mass domestic surveillance
- Fully autonomous offensive weapons
Anthropic CEO Dario Amodei publicly reaffirmed the company’s position, stating that while it preferred to continue supporting defense-related work, it would not compromise its safeguards on surveillance and autonomous weapons use.
He added that if the Department of Defense chose to transition away from Anthropic systems, the company would cooperate to ensure military operations continued without disruption.
OpenAI and Industry Reaction
Reports from the BBC indicate that OpenAI CEO Sam Altman communicated internally that OpenAI holds similar ethical boundaries. According to those reports, OpenAI’s defense contracts would likewise exclude uses considered unlawful or unsuitable, including domestic surveillance and autonomous offensive weapons systems.
OpenAI co-founder Ilya Sutskever, who had earlier departed the company to launch his own AI venture, publicly expressed support for Anthropic’s stance.
However, shortly after the U.S. government’s directive, OpenAI announced a separate agreement with the Pentagon. According to The New York Times, discussions between OpenAI and U.S. defense officials had begun earlier in the week.
Broader Context
Anthropic, OpenAI, and Google were among the AI companies awarded U.S. Department of Defense contracts in July of last year. While some Google employees have reportedly voiced support for Anthropic’s position, Google and its parent company have not issued official public statements on the matter.
Why This Matters
The situation highlights growing tension between:
- National security priorities
- AI ethics and governance
- Corporate responsibility in defense partnerships
As AI systems become increasingly integrated into military and governmental infrastructure, questions around surveillance, autonomous weapons, and ethical deployment continue to shape policy and industry direction.