Defense officials were pushing for unrestricted lawful use of Anthropic’s AI models. Anthropic agreed, with two caveats: the leading AI company did not want its technology deployed for autonomous weapons systems or for mass domestic surveillance. The Biden administration agreed to these terms under a 2024 contract, but Trump administration Under Secretary of War Emil Michael is pushing back.
“I looked at the contracts and was like, holy cow. You can’t use them to plan a kinetic strike. You can’t use their AI model to move a satellite,” Michael said, arguing that the terms are incompatible with the Pentagon’s mission.
“I need to have terms of service be rational relative to our mission set,” he added.
Negotiations came to a head when the Department of War designated Anthropic a “supply-chain risk,” a label that historically has been applied only to foreign companies.
During talks, Anthropic agreed to make exceptions to its rules for certain scenarios, such as planning for a drone swarm or responding to a Chinese hypersonic missile, but those carve-outs were not enough for the Pentagon.
“The exceptions [don’t] work,” the under secretary said. “I cannot predict for the next twenty years all the things we might use AI for, so all lawful use seems like a good thing. If Congress wants to act, great. We have our own internal policies; we’ll follow them. It’s our province to decide how we fight and win wars.”
Michael also raised concerns about operational risk.
“If they’re willing to impose red lines that limit lawful military use, what’s to stop them from enforcing a shutdown if they disagree with how we’re applying it?” he said. “If in the middle of an operation they decide it violates something and shut it down, that’s not just unreliable—it’s dangerous to the troops depending on it.”
Senior employees at Anthropic have publicly acknowledged the company restricts model access when it believes usage violates its policies.
“We know what Russia and China and North Korea and others are trying to do with these models right now. We catch them. We turn it off,” said Tarun Chhabra, a senior member of Anthropic’s national security team, a team involved in negotiations with the Pentagon. Chhabra previously served as a member of Biden’s National Security Council.
Trump administration AI Czar David Sacks has stated that Biden-era policies would cause America to lose the global artificial intelligence race and highlighted that many Biden AI staffers have joined Anthropic. “Guess where those Biden AI staffers went to work as soon as the admin was over? Anthropic.”
Biden alum Elizabeth Kelly currently leads Anthropic’s Beneficial Deployments team, and former Biden special assistant Benjamin Merkel serves as a legislative analyst.
Anthropic told The Daily Wire that while the company has employed staff from the Biden administration, it recently appointed Trump alum Chris Liddell to its board of directors.
Anthropic CEO Dario Amodei said that despite the Trump administration’s unprecedented decision to deem his company a supply-chain risk, Anthropic will allow the Pentagon to continue using its technology.
“We have offered continuity. We’re actually deeply concerned about this. We’re deeply concerned about the kind of interruption of service, which is exactly what’s happening when we’re designated a supply chain risk,” Amodei said.
Secretary of War Pete Hegseth confirmed the continued usage, saying, “Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.”
Michael said the unprecedented designation was needed because he did not want the risk of Anthropic’s policy bias entering any part of the defense enterprise. “If their model has this policy bias based on their constitution, their culture, their people, I don’t want Lockheed Martin using their model to design weapons for me.”
Dean Ball, a former Trump administration official who left his AI advisory position six months ago, said he sympathizes with Anthropic’s stance.
“I’m very sympathetic to—if I built something that was powerful and dangerous and uncertain, and the government was excitedly buying it for uses that could be very profound in how they affect people’s lives—wanting to be very careful that I didn’t sell them something that went horribly [expletive] wrong. And then I am blamed for it by the public and by the government,” said Ball.
Anthropic’s position differs from that of other AI companies: the makers of Elon Musk’s Grok and Google’s Gemini have agreed to let the Pentagon use their software for all lawful use cases.
Amodei told CBS that his hesitation to give the Pentagon full lawful use of Anthropic’s technology stems from his view that the “technology is not ready.” He added, “We don’t want to sell something that we don’t think is reliable, and we don’t want to sell something that could get our own people killed or that could get innocent people killed.”
Hegseth argues that unelected tech executives such as Amodei should not make that call, and that operational risk decisions belong to the United States military and, ultimately, President Trump.
“The Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives,” he stated.