The US military recently used Anthropic’s AI chatbot, Claude, during an operation targeting former Venezuelan President Nicolás Maduro. The deployment came shortly before a dispute emerged between Anthropic and the Pentagon over the appropriate use of artificial intelligence in combat.
The US military employed Anthropic’s Claude AI chatbot as part of an operation last month that involved bombing several sites in Caracas. Claude was accessed through Anthropic’s partnership with Palantir Technologies, whose tools are already integrated into Pentagon and federal law enforcement operations. The chatbot’s use raises questions about whether the operation adhered to Anthropic’s own usage policies, which explicitly prohibit facilitating violence, weapons development, and surveillance. Anthropic declined to confirm Claude’s involvement in the specific operation, saying only that any use must comply with its usage policies. The Defense Department has not commented on the matter.
A contract between Anthropic and the Pentagon worth up to $200 million is now in jeopardy over differing views on AI deployment. Anthropic wants safeguards preventing Claude from being used for autonomous weapons targeting and domestic surveillance. The Pentagon, as outlined in a January 9 memo, believes it should be free to deploy commercial AI tools as it deems necessary, provided US laws are not violated.
Anthropic CEO Dario Amodei recently articulated a clear position on the military use of AI, saying it should support national defense “in all ways except those which would make us more like our autocratic adversaries.” He specifically identified autonomous weapons and mass surveillance as unacceptable practices for democratic nations. Defense Secretary Pete Hegseth, by contrast, has said the Pentagon intends to use AI models that help it wage war.
The Pentagon is pressing AI companies, including Anthropic, OpenAI, and Google, to deploy their models on classified military networks with fewer of the safety restrictions typically applied to civilian users. Anthropic is currently the only AI developer whose model is available in classified settings, though it remains subject to the company’s own usage policies. OpenAI has already agreed to relax some of its standard guardrails for Pentagon use on an unclassified network accessible to more than 3 million Defense Department employees.