US military used Anthropic's AI model Claude in Venezuela raid, report says
https://www.theguardian.com/technology/2026/feb/14/us-military-anthropic-ai-model-claude-venezuela-raid
Wall Street Journal says Claude was used in operation via Anthropic's partnership with Palantir Technologies
William Christou
Sat 14 Feb 2026 11.15 EST
Claude, the AI model developed by Anthropic, was used by the US military during its operation to kidnap Nicolás Maduro from Venezuela, the Wall Street Journal revealed on Saturday, a high-profile example of how the US defence department is using artificial intelligence in its operations.
The US raid on Venezuela involved bombing across the capital, Caracas, and the killing of 83 people, according to Venezuela's defence ministry. Anthropic's terms of use prohibit the use of Claude for violent ends, for the development of weapons or for conducting surveillance.
The WSJ cited anonymous sources who said Claude was used through Anthropic's partnership with Palantir Technologies, a contractor with the US defence department and federal law enforcement agencies. Palantir declined to comment on the claims.
The US and other militaries increasingly deploy AI as part of their arsenals. Israel's military has used drones with autonomous capabilities in Gaza and has relied extensively on AI to populate its targeting bank there. The US military has used AI targeting for strikes in Iraq and Syria in recent years.
The Pentagon announced in January that it would work with xAI, owned by Elon Musk. The defence department also uses a custom version of Google's Gemini and OpenAI systems to support research.
...

