
Anthropic CEO to Pentagon: 'We Will Challenge in Court'

February 28, 2026 · 7 min read · 1,398 words
Tags: AI, Anthropic, Defense, Regulation, Video Summary
CBS News interview with Anthropic CEO Dario Amodei about the Pentagon supply chain risk designation
Image: Screenshot from YouTube.

Key insights

  • Anthropic holds firm on two red lines (no domestic mass surveillance and no fully autonomous weapons) while supporting 99% of Pentagon use cases
  • The supply chain risk designation has never been applied to an American company, only to foreign adversaries like Kaspersky Labs
  • No formal government action has been received, only social media posts, and Anthropic says it will challenge any designation in court
Source: YouTube
Published February 28, 2026
Host: CBS News correspondent
Guest: Dario Amodei (CEO, Anthropic)



In Brief

Anthropic CEO Dario Amodei sat down with CBS News hours after Defense Secretary Pete Hegseth declared the company a supply chain risk to national security, a formal designation meaning the government considers a vendor a threat to critical defense systems. In a 27-minute exclusive interview, Amodei explained why Anthropic refuses to move on two specific restrictions (no domestic mass surveillance and no fully autonomous weapons) while offering its technology for what he describes as 99% of Pentagon use cases. He called the supply chain designation "retaliatory and punitive," revealed that no formal legal action has been received beyond social media posts, and stated that Anthropic will challenge any formal designation in court.

  • 99%: share of Pentagon use cases Anthropic supports
  • 3 days: length of the Pentagon's ultimatum to agree to terms
  • 0: formal legal documents received by Anthropic

Background

This interview is the latest development in a rapidly escalating conflict between Anthropic and the Pentagon. The dispute began when the Department of Defense, renamed the Department of War under Secretary Hegseth, demanded unrestricted access to Anthropic's AI model Claude for "all lawful uses." Anthropic refused, drawing two red lines.


The two red lines

Amodei opens by emphasizing that Anthropic has been, in his words, the most forward-leaning AI company in working with the U.S. military. The company was the first to put its models on the classified cloud, a secure computing environment that meets military-grade security requirements, and the first to build custom models for national security. Claude is deployed across the intelligence community and military for digital defense operations and combat support.

But Amodei draws the line at two specific use cases (1:18):

1. Domestic mass surveillance. AI has made something new possible: buying large volumes of personal data on Americans (locations, personal information, political affiliations) from private companies, then analyzing it at scale (1:31). Amodei argues this was never useful before AI, so the law never addressed it. The Fourth Amendment, the constitutional protection against unreasonable government searches, has not caught up with what AI can do (9:53).

2. Fully autonomous weapons. This means weapons that select and fire at targets without any human involvement, as opposed to the partially autonomous systems already used in Ukraine (2:08). Amodei makes two arguments against them. First, AI is simply not reliable enough. Anyone who works with AI models knows there is an unpredictability that has not been technically solved (2:38). Second, there is an oversight problem: if one person controls an army of 10 million drones with no human soldiers making targeting decisions, that concentrates power in dangerous ways (20:29).

Amodei adds that Anthropic is not categorically against autonomous weapons. If adversaries develop them, the U.S. may need them too. But the technology is not ready, and the oversight conversation has not happened yet (17:34). He says Anthropic offered to prototype these systems in a sandbox (an isolated test environment where the technology can be tried without deploying it in real operations) with the Pentagon, but the department was not interested unless it could do whatever it wanted from day one (18:04).


How the negotiations broke down

The Pentagon gave Anthropic a three-day ultimatum: agree to our terms or be designated a supply chain risk (3:22).

During that window, there were several rounds of back-and-forth. At one point, the Pentagon sent language that appeared to meet Anthropic's terms. But Amodei says it was filled with phrases like "if the Pentagon deems it appropriate," language that did not actually concede anything meaningful (4:04).

Pentagon spokesman Sean Parnell reiterated the government's position the day before: "We only allow all lawful use" (4:37). That position never changed.

Amodei says he offered to continue providing services during any transition, even if the government moved against Anthropic, because he is worried about the disruption to troops. Uniformed military officers told him that losing Claude would set them back six to twelve months, or possibly longer (6:11).


'Retaliatory and punitive'

Amodei's sharpest language is reserved for the supply chain risk designation itself.

He points out that this designation has, to Anthropic's knowledge, never been applied to an American company (12:40). Previous targets include Kaspersky Labs (a Russian cybersecurity firm with suspected ties to the Russian government) and Chinese chip suppliers. Being grouped with foreign adversaries feels punitive, Amodei says, given how much Anthropic has done for U.S. national security (12:59).

He also argues the Pentagon overreacted to what is essentially a contract disagreement. The normal response would be to choose a different vendor. Instead, the government extended its actions beyond the Department of War to other agencies, tried to revoke contracts outside the defense department, and used the designation to stop other private companies from using Anthropic in their military work (12:04).


'We will be fine', and we will go to court

Asked whether Anthropic can survive this, Amodei is blunt: "Not only survive, we're going to be fine" (25:14).

He says the actual legal scope of the supply chain designation is much narrower than Secretary Hegseth's tweet implied. The law only prevents companies from using Anthropic as part of their military contracts, not from doing business with Anthropic at all (24:51). Amodei describes the tweet as deliberately designed to create "fear, uncertainty, and doubt" (25:30).

The most significant legal detail: Anthropic has received no formal documents whatsoever (26:33). Everything so far has come through tweets from the president and Secretary Hegseth. Amodei states clearly that when formal action arrives, Anthropic will challenge it in court (26:45).


Why AI is not like building aircraft

The interviewer pushes Amodei on a comparison to Boeing: Boeing builds planes for the military but does not tell the Pentagon how to use them. Why should Anthropic be different?

Amodei's answer centers on the pace of change. AI model capability doubles roughly every four months, a speed of development the defense sector has never encountered (15:28). A general understands how an aircraft works because that technology has been stable for decades. AI is different.

He also argues this is a temporary problem. Congress only needs to catch up once on these two specific issues. The restrictions are narrow enough that legislation could address them without slowing down military AI adoption (16:12).

Amodei acknowledges the limits of his own position. The right long-term solution, he says, is not for a private company and the Pentagon to argue about this โ€” Congress needs to act (14:08).


'Disagreeing with the government is the most American thing in the world'

Asked what he would say to President Trump directly, Amodei frames Anthropic's position as patriotic (23:22). The company leaned forward in military deployment because it believes in defending America. The red lines were drawn to protect American values. And when threatened with unprecedented government action, Anthropic exercised its First Amendment rights to push back.

He also rejects the characterization of Anthropic as partisan. He points to attending an energy event with Trump in Pennsylvania, endorsing parts of the administration's AI action plan, and participating in a health AI pledge (21:34). Amodei describes Anthropic as "studiously even-handed" on political matters (22:17).

Asked to rate the chances of an agreement on a scale of 1 to 10, he declines: "I have no crystal ball." But Anthropic's position has not changed since day one, and the company remains willing to reach a deal within its red lines (22:43).


Glossary

Classified cloud: A secure cloud computing environment meeting military-grade security requirements. Only AI vendors with classified cloud access can serve sensitive defense operations.
Defense Production Act: A U.S. law giving the government emergency powers to compel companies to prioritize national defense needs. Referenced as a potential tool to force Anthropic's compliance.
Fourth Amendment: Part of the U.S. Constitution protecting citizens against unreasonable government searches. Amodei argues AI-enabled mass data analysis is outpacing its legal protections.
Fully autonomous weapons: Weapons systems that select and engage targets without human involvement. Distinguished from partially autonomous systems currently used in conflicts like Ukraine.
Supply chain risk designation: A formal assessment that a supplier poses a risk to critical defense systems. Previously applied only to foreign entities such as Kaspersky Labs and Chinese chip suppliers.
