
Mar 6, 2026

Emil Michael is the under secretary of war for research and engineering and head of the Pentagon’s artificial intelligence portfolio. We spoke with him yesterday for insight into the negotiations between Anthropic and the Department of War that broke down this week. Here, his account.
In August, Emil Michael, the under secretary of war for research and engineering, took over the Pentagon’s AI portfolio.
To catch up on the department’s AI affairs, he familiarized himself with the contract language of Anthropic and other AI companies. Ultimately, he came to feel that the Biden people who signed the initial Anthropic deals — much like that president, he told us — were “asleep at the wheel.”
“I was like, ‘Holy cow,’” Michael said. “There’s 25 pages of terms and conditions of things I can’t do.”
Among the prohibitions: using Anthropic’s AI to plan kinetic strikes, which are military attacks using physical force, like missiles or bombs, and are generally considered a central activity of war. It’s routine, for example, for a war-fighter to plan a hypothetical kinetic strike. Using Claude to do so, however, would violate Anthropic’s terms of service, he said.
“This is a contract that should be made with GEICO Insurance, not with the Department of War,” he added.
Michael started to renegotiate contracts with all the model providers and “make sure there were none of these crazy terms,” he said.
But the discussions with Anthropic quickly became onerous.
“It was just three months of knockdown, drag-out negotiations,” Michael said.
Much of it, in his view, was spent explaining the basic work of the Department of War and running through individual scenarios.
Here’s one.
It’s nighttime on a military base. Everyone’s asleep. A drone swarm descends on our troops — hundreds of these things, with no way for a human to defend against the attack — and a defense system leveraging AI could use a laser to take them out, Michael recalled.
For that scenario, the Anthropic team granted an exception to the terms of service, he said; “They’re like, ‘Oh, interesting.’”
And so began the process of imagining, as best Michael could, every possible future wartime scenario that would require a carveout in Anthropic’s terms of service.
Here’s another.
“What if there’s a missile barrage coming at us at hypersonic speeds, we have 90 seconds to act, and there’s a safe way to take it out and a human can’t do that?” Michael said. “And they’re like, ‘Well, okay, maybe that’s an exception, but just call me every time something happens and you need an exception.’ I was like ‘That’s not quite how it works.’”
According to Semafor, Anthropic has called this characterization “patently false,” but Michael said “20 people in that room could validate that it happened.”
The exchanges were tiring. No matter the hypothetical, no matter the argument, Anthropic insisted that cases outside its terms of service come down to Anthropic’s own judgment, or that “maybe” there could be an exception.
“We kept going down these scenarios and I was like, ‘Guys, I can’t know every exception for a three-million-person department, nor can I predict the future,’” Michael said.
Michael wanted clearance for all lawful use of the technology.
Between the Department of War — “We’re the biggest bureaucracy in the world. Literally!” — the Federal Aviation Administration, and other regulatory bodies, there were all kinds of restrictions, policies, safety qualification tests, and so on.
But Anthropic wouldn’t budge.
In an interview with CBS News last week, Anthropic CEO Dario Amodei said that the law has not “caught up” to some of the AI concerns: “And for right now, we are the ones who see this technology on the front line.”
“It turned out that they just couldn’t get there,” Michael said.
Then, in Michael’s view, Anthropic began leaking the contract negotiations to the press for the purposes of “recruiting OpenAI’s researchers” and “winning the consumer who’s anti-Trump,” he said.
He found Anthropic’s leaks suspicious partly because, of all the things Anthropic and the Pentagon talked about, they centered on only two issues: “autonomous weapons Skynet scary world, and this mass surveillance stuff.” These issues happened to sound exceptionally terrifying and made the Trump administration look nefarious, he said.
A Reuters piece from January, one of the earliest, indeed highlighted these issues: per the outlet, sources said “Anthropic representatives raised concerns that its tools could be used to spy on Americans or assist weapons targeting without sufficient human oversight.”
For its part, Anthropic says its technology just isn’t yet reliable enough to power fully autonomous weapons, and that the law hasn’t caught up with AI’s potential for domestic surveillance.
But Michael felt the two issues were “red herrings.”
First, Anthropic’s concern with surveillance, per Amodei, is that publicly available data, purchased by the government, “could all be transcribed, interpreted, and triangulated to create a picture of the attitude and loyalties of many or most citizens.”
“Uhhh, isn’t that what you guys do? You guys buy databases, you guys scrape the internet at mass scale, but I can’t do that? So it was sort of nuts,” Michael told us.
The Department of War uses commercially obtained data to fine-tune its AI deployments and build better intelligence profiles (understanding how people generally behave, training on data obtained from American tech companies).
But it’s no fan of illegal domestic surveillance:
“On mass surveillance, we’re not the FBI, we’re not the Department of Homeland Security — that’s not our business,” Michael said.
“So the notion that we would get painted as wanting to do that is crazy,” he added. “We don’t want censorship, we don’t want people’s privacy intruded, we don’t want people’s homes raided with crazy warrants that are invented overnight.”
But Michael certainly does not deny that the Pentagon wanted, and still wants, autonomous weapons capability.
He brought up the military’s Golden Dome proposal, meant to be a missile defense shield based partially in outer space. Some of those space assets would shoot down, from orbit, missiles traveling five times the speed of sound, which are normally hard to hit.
It’s impractical to position humans in space in order to see, target, and shoot the missile in the 90 available seconds, Michael said. If it’s going to work, there has to be automation.
“And you might use AI to discriminate. Is this a decoy? Is this a missile head, missile body? Which weapon do I use, based on this trajectory?” he said.
According to Michael, Anthropic was amenable to that point, but the same narrative, that the Pentagon was pushing for these two scary objectives, persisted in the media, which reinforced Michael’s belief that “this was an info op,” he said.
Ultimately, frustration built.
“Why would you want to do business with the Department of War if you don’t want it [AI] to do Department of War things?”
The real “trigger point” in their talks came after the capture of Venezuelan leader Nicolás Maduro, Michael said. The raid, which involved kinetic strikes, was carried out with Palantir, whose software uses Anthropic’s AI.
Sometime after the operation, as first reported by The Wall Street Journal, a senior Anthropic executive reached out to someone at Palantir, wanting to know specifically how Claude was used.
“So they were trying to get classified information,” Michael said. “That’s a no-no.”
Allegedly, Anthropic implied they could essentially pull the plug on a military operation if they didn’t approve of it, Michael said; the senior Palantir exec who notified the Pentagon about the exchange was worried about the future integrity of military operations.
“They were implying that if they didn’t like the way it was used during that raid, that we might be violating the terms of service, and they may pull that software at any time or put a guardrail in to prevent an operation from happening, which is incredibly scary because then you’re putting real lives at risk,” Michael said. “It’s no joke, right?”
According to reporting by Semafor, Anthropic has said this accounting of its exchange with Palantir is “false.” A spokesperson said the company hasn’t expressed concerns to any industry partner “outside of routine discussions on strictly technical matters.”
We asked Michael if he found Amodei “just sort of annoying” to work with, as the breakdown between the two seemed, to us, at least partly rooted in cultural friction.
The answer was yes. Not least because Amodei’s “politburo,” a term for the principal policymaking committee of a communist party, dragged out the process, per Michael.