The fallout that investors and tech watchers feared is now fully underway. In a dramatic turn of events on February 28, 2026, President Donald Trump ordered every federal agency to immediately stop using technology from Anthropic, the San Francisco-based AI company behind the Claude model. The directive follows days of escalating conflict between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth over the military’s demand for unrestricted access to the company’s AI tools — a demand Anthropic flatly refused.
The political shockwave adds a volatile new layer to an already turbulent stretch for a company that had been riding one of the most explosive revenue growth curves in Silicon Valley history.
How the Crisis Unfolded
The standoff began when the Pentagon demanded that Anthropic agree to give the U.S. military “unfettered access” to its AI tools for “all lawful purposes.” Anthropic pushed back hard, drawing a clear line on two specific use cases: mass domestic surveillance and fully autonomous weapons. The company’s leadership argued that no amount of government pressure would move it on those principles.
Defense Secretary Pete Hegseth escalated matters by declaring Anthropic an immediate “supply chain risk” — a designation with serious commercial consequences. Under that label, any business that contracts with the military would be prohibited from engaging in “any commercial activity with Anthropic.” Hegseth announced the designation on social media rather than through formal legal channels, and Anthropic says it has received no official notice of it.
President Trump then piled on via Truth Social, writing that the government does not need, want, or plan to do business with Anthropic again. He also warned the company to cooperate during the six-month phase-out period or face the “Full Power of the Presidency” — including what he described as potential civil and criminal consequences.
Anthropic Fights Back
Rather than capitulate, Anthropic announced it would challenge any supply chain risk designation in court. The company argued the move would be “legally unsound” and set a dangerous precedent for any American business that negotiates with the federal government. Anthropic also said that prior to Trump’s announcement, it had already told the Defense Department it would support a smooth transition to another provider if the government chose to walk away.
Anthropic’s public statement was pointed. The company said that intimidation from what the president has called the “Department of War” would not change its stance on the two core issues — domestic surveillance and autonomous weapons. That framing drew attention for its deliberate use of the administration’s own rhetorical language.
What the Supply Chain Risk Label Actually Means
The practical impact of the supply chain risk designation is narrower than it might initially appear. Anthropic clarified that the prohibition only directly affects companies that hold military contracts and use Anthropic for work performed on behalf of the Department of Defense. Broader commercial customers outside the military-industrial space are not currently impacted by the order.
That said, the uncertainty surrounding the designation is itself creating commercial friction. Companies that work across both the private sector and government defense contracts now face the decision of whether to continue using Anthropic tools at all — even for non-military work — out of caution.
OpenAI Moves In
While Anthropic was battling the White House and Pentagon, OpenAI moved quickly. The company announced an expanded partnership with the Pentagon, signing a new deal that positions it as the preferred AI provider for defense applications. OpenAI CEO Sam Altman also said he aimed to help “de-escalate” tensions between the AI industry and the Pentagon, a statement that reads as both diplomatic outreach and competitive positioning.
The contrast between the two companies could not be sharper. While Anthropic is preparing legal challenges and holding firm on civil liberties guardrails, OpenAI is signing Pentagon contracts and taking on the federal business Anthropic has now lost. Some OpenAI employees publicly voiced support for Anthropic’s position, acknowledging the difficult choices involved in AI ethics, but the corporate moves tell a different story.
The Broader Market Context
This political crisis lands on top of an already bruising period for the tech and software sectors tied to Anthropic’s rapid expansion. Earlier in February, the launch of Claude Cowork — an AI tool designed to automate document management, file organization, and cross-platform workflows — triggered a significant software stock selloff. Companies like Thomson Reuters, LegalZoom, Experian, and others saw their shares plunge as investors priced in the risk of AI-driven displacement.
Thomson Reuters suffered its largest single-day stock drop on record. The iShares Expanded Tech-Software Sector ETF fell more than 14% over six consecutive sessions. Software sentiment, according to analysts at Jefferies, reached its worst levels ever.
Despite this market disruption, Anthropic’s own growth has been staggering. The company’s annualized revenue hit $14 billion in January 2026 — a more than tenfold increase from the $1 billion it was generating just twelve months earlier. It has raised $30 billion in private funding and has reportedly signaled plans to go public at some point in 2026. Anthropic remains a private company, so its stock is not currently available to the public. Investors seeking exposure to its growth have largely done so through Amazon, which holds a significant stake in the company and powers its AI tools through Amazon Web Services.
The Legal Battle Ahead
Anthropic says it had not received any formal notice from the White House or the military about the status of its negotiations at the time of Trump’s announcement. That procedural gap may become central to any legal challenge the company pursues. Labeling a private company a supply chain risk without formal notice — and announcing the designation via social media — is unusual and legally untested territory.
Legal experts watching the situation say Anthropic’s challenge will likely argue that the process violated due process norms and that the designation lacks a valid statutory basis for broad commercial prohibitions. The outcome of that challenge could set significant precedent for how the government deals with AI companies that push back on military demands.
What Comes Next
The six-month phase-out period gives federal agencies until late August 2026 to remove Anthropic tools from their systems. Whether Anthropic’s legal challenge will halt or delay that process remains to be seen. Sam Altman’s offer to help de-escalate the broader tension between the AI industry and the Pentagon could also shape whether other AI companies face similar pressures in the future.
For investors watching the market, the fallout underscores a reality that many had started to grasp over the past several weeks: Anthropic is no longer just an AI research lab. It is a company at the center of some of the most consequential political, commercial, and legal battles in technology today.