OpenAI CEO Sam Altman unveiled a reworked agreement with the Pentagon on Monday night governing the Defense Department’s use of the company’s AI services, one he said provides stronger guarantees that the military won’t use OpenAI’s systems for domestic surveillance.
The new agreement states that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” according to a post on OpenAI’s website. OpenAI had faced some backlash as news of an initial agreement between the leading AI company and the Pentagon emerged Friday. Many observers claimed the original language shared on OpenAI’s website provided ample loopholes for the government to surveil Americans.
The move comes after weeks of intense debate between rival AI company Anthropic and the Pentagon over how the military can use advanced AI systems. While the Defense Department had wanted Anthropic to agree to let the department use its systems for “any lawful purpose,” Anthropic maintained its systems could not be used for domestic surveillance or to control deadly autonomous weapons. Until last week, Anthropic was the only major AI company whose services were cleared for use on classified networks.
Researchers argue that without guardrails, AI could allow authorities to monitor individuals with unprecedented speed and accuracy, combing through mountains of digital data to track people’s movements and behavior.
“It is critical to protect the civil liberties of Americans,” Altman wrote in a post on X on Monday night announcing the new contract language that he said better limits domestic surveillance. “The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA).”
Katrina Mulligan, head of national security partnerships for OpenAI, added in another post on X on Tuesday morning that “defense intelligence components are excluded from this contract,” noting that she would be open to future work with the NSA “if the right safeguards were in place.”
OpenAI did not respond to a request for comment.
Many observers remained unswayed Tuesday, concerned that the snippets of OpenAI’s contract with the Pentagon published by the company remained purposefully vague and provided carve-outs for domestic surveillance by various intelligence agencies within the Defense Department. The full text of the contract has not been released publicly.
“OpenAI has said that the Department of War contractually agreed not to use ChatGPT in agencies that surveil American people,” said Brad Carson, a former congressman and general counsel of the Army who now leads the Washington, D.C., policy group Americans for Responsible Innovation. “They have been happy to show contract language when it benefited them, but they refuse to release to the public this contractual provision.”
“I’ve reluctantly come to the conclusion that this provision doesn’t really exist, and they are just trying to fake it,” Carson told NBC News. Carson recently founded an AI-focused super PAC that has received $20 million from OpenAI rival Anthropic.
Several legal experts agreed that greater transparency about the entire contract and any other key clauses is necessary to properly evaluate the company’s claims.
“We still need to see the whole contract to say anything with a reasonable level of confidence,” said Brian McGrail, senior counsel at the Center for AI Safety, a nonprofit research and advocacy group. “It’s definitely a step in the right direction, and I do want to give OpenAI some credit.”
OpenAI’s agreement with the Pentagon was announced shortly after Defense Secretary Pete Hegseth said he would label Anthropic, which had long been in contract negotiations with the Pentagon, a supply chain risk to national security. Anthropic said the designation, which would force the Pentagon and contractors to stop using Anthropic’s services for defense purposes, has never before been publicly applied to an American company.
At an event in Sausalito, California, on Monday, retired Gen. Paul Nakasone, the former director of the National Security Agency and U.S. Cyber Command, said that the Pentagon should work to incorporate all leading American AI companies’ technology into national defense.
“We need Anthropic, we need OpenAI, we need all of our large language model companies to be partnering with our government,” Nakasone, who is a member of OpenAI’s board of directors, said at a conference sponsored by the Aspen Institute. “I think the supply chain piece is not good. The discussions over the weekend and the tenor of those discussions were tough for me to listen to. As an American citizen, someone who served in government, I just think that it’s not right, OK? This is not a supply chain risk.”

Anthropic had long maintained that the Defense Department could not use its AI systems for domestic mass surveillance or for direct use in autonomous weapons, though in December it made concessions allowing the military to use its systems for cyber and missile defense. After a meeting between Anthropic CEO Dario Amodei and Hegseth last Tuesday, the Defense Department issued an ultimatum for Anthropic to reach an agreement by 5 p.m. ET that Friday.
However, on Thursday, an Anthropic spokesperson told NBC News that the Defense Department’s latest “language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.”
But as Anthropic’s relationship with the Defense Department broke down, OpenAI’s deepened, with Friday’s announcement of a contract adding a fresh round of intrigue to a story that had already captivated much of the tech and defense community. In his post Monday night, Altman said the rush to ink a deal made the negotiations look “opportunistic and sloppy” even though OpenAI was “genuinely trying to de-escalate things and avoid a much worse outcome.”