Over the last week, tensions between the Pentagon and artificial intelligence giant Anthropic have reached a boiling point.
Anthropic, the creator of the Claude chatbot system and a frontier AI company with a defense contract worth up to $200 million, has built its brand around the promotion of AI safety, touting red lines the company says it won’t cross.
Now, the Pentagon appears to be pushing those boundaries.
Hints of a possible rift between Anthropic and the Defense Department, now rebranded the Department of War, began to intensify after The Wall Street Journal and Axios reported the use of Anthropic products in the operation to capture Venezuelan President Nicolás Maduro.
It is unclear how Anthropic’s Claude was used.
Anthropic has not raised or found any violations of its policies in the wake of the Maduro operation, according to two people familiar with the matter, who asked to remain anonymous in order to discuss sensitive topics. They said that the company has high visibility into how its AI tool Claude is used, such as in data analysis operations.
Anthropic was the first AI company allowed to offer services on classified networks, via Palantir, which partnered with it in 2024. Palantir said in an announcement of the partnership that Claude could be used “to support government operations such as processing vast amounts of complex data rapidly” and “helping U.S. officials to make more informed decisions in time-sensitive situations.”
Palantir is one of the military’s favored data and software contractors; it collects data from space sensors, for example, to help provide better strike targeting for soldiers. The company has also attracted scrutiny for its work with the Trump administration and law enforcement agencies.
Though Anthropic has maintained that it does not and will not allow its AI systems to be used directly in lethal autonomous weapons or for domestic surveillance, the reported use of its technology in connection with the Venezuela raid, through the contract with Palantir, allegedly raised concerns from an Anthropic employee.

Semafor reported Tuesday that, during a routine meeting between Anthropic and Palantir, a Palantir executive became worried that an Anthropic employee appeared to object to how the company’s systems might have been used in the operation, leading to “a rupture in Anthropic’s relationship with the Pentagon.”
A senior Pentagon official told NBC News that “a senior executive from Anthropic communicated with a senior Palantir executive, inquiring as to whether their software was used for the Maduro raid.”
According to the Pentagon official, the Palantir executive “was alarmed that the question was raised in such a way to imply that Anthropic might disapprove of their software being used during that raid.”
Citing the classified nature of military operations, an Anthropic spokesperson would neither confirm nor deny that its Claude chatbot systems had been used in the Maduro operation: “We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” the spokesperson told NBC News in a statement.
The spokesperson pushed back on the idea that the incident had caused notable fallout, telling NBC News the company had not held out-of-the-ordinary discussions about Claude usage with partners or shared any mission-related disagreements with the military.
“Anthropic has not discussed the use of Claude for specific operations with the Department of War,” the spokesperson said. “We have also not discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters.”
Palantir did not reply to a request for comment.
The core tension between Anthropic and the Defense Department appears to be rooted in a broader clash over the military’s future use of Anthropic’s systems. The Defense Department has recently emphasized its desire to be able to use all available AI systems for any purpose allowed by law, while Anthropic says it wants to maintain its own guardrails.
Sean Parnell, chief spokesman for the Pentagon, told NBC News that “The Department of War’s relationship with Anthropic is being reviewed.”
