Military Used AI Chatbot in Maduro Capture Effort
Written by Black Hot Fire Network Team on February 14, 2026
The US military recently used Anthropic’s AI chatbot, Claude, during an operation targeting former Venezuelan President Nicolás Maduro. The deployment came shortly before a disagreement arose between Anthropic and the Pentagon over the appropriate use of artificial intelligence in combat.
Operation Targeting Nicolás Maduro
The US military employed Anthropic’s Claude AI chatbot in an operation last month that involved bombing several sites in Caracas. Claude was accessed through Anthropic’s partnership with Palantir Technologies, whose tools are already integrated into Pentagon and federal law enforcement operations. The chatbot’s use raises questions about whether it adhered to Anthropic’s own usage policies, which explicitly prohibit facilitating violence, weapons development, and surveillance. Anthropic declined to confirm Claude’s involvement in the specific operation, stating only that any use must comply with its Usage Policies. The Defense Department has not commented on the matter.
Dispute Over $200 Million Contract
A contract potentially worth up to $200 million between Anthropic and the Pentagon is now in jeopardy due to differing views on AI deployment. Anthropic advocates safeguards that would prevent Claude from being used for autonomous weapons targeting and domestic surveillance. The Pentagon, however, as outlined in a January 9 memo, believes it should be free to deploy commercial AI tools as it deems necessary, provided US laws are not violated.
Anthropic CEO’s Stance on Military AI Use
Anthropic CEO Dario Amodei recently articulated a clear position on military applications of AI, stating it should support national defense “in all ways except those which would make us more like our autocratic adversaries.” He specifically identified autonomous weapons and mass surveillance as unacceptable practices for democratic nations. Defense Secretary Pete Hegseth, by contrast, has signaled the Pentagon’s intention to use AI models that facilitate warfare.
Pentagon’s Push for Fewer Restrictions on Classified Networks
The Pentagon is actively encouraging AI companies, including Anthropic, OpenAI, and Google, to deploy their models on classified military networks with fewer of the safety restrictions typically applied to civilian users. Anthropic is currently the only AI developer whose model is available in classified settings, though it remains subject to the company’s own usage policies. OpenAI has already agreed to relax some of its standard guardrails for Pentagon use on an unclassified network accessible to more than 3 million Defense Department employees.