
The Sound of Fury
Inside Anduril's race to build America's first autonomous fighter jet, and the Pentagon's bet that robotic airpower will redefine warfare in the decades to come
Feb 24, 2026

Yesterday, Axios reported that the Department of War plans to meet with Anthropic, maker of the AI model Claude, sometime Tuesday to hash out disagreements that are a few months in the making.
The disagreements trace back to a report in The Wall Street Journal earlier this month that Claude was used in the capture of Venezuelan President Nicolás Maduro, which stirred up a debate in the world of AI policy.
The raid was carried out using Palantir's software. Palantir, which began a partnership with Anthropic in 2024, uses frontier large language models like Claude to help the military navigate its data analytics product and to make tasks like data mining easier.
So, allegedly, Claude was used via Palantir’s tech in one of the Pentagon’s most successful operations in modern history. Fucking sick. What’s the problem?
According to the reporting so far, Anthropic is concerned that its model was used outside the scope of its terms of service, which prohibit using Claude for violence, weapons development, and most of the other things you'd be doing at the Department of Literal War.
“We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” an Anthropic spokesman told the Journal on February 13th. “Any use of Claude — whether in the private sector or across government — is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance.”
Following the raid, per the Journal, someone at Anthropic reached out to a "counterpart" at Palantir, asking specifically how Claude had been used in the operation.
And that inquiry didn’t sit well with the Pentagon.