
On one hand, its systems are providing real-time targeting recommendations for U.S. airstrikes on Iran. On the other, it has been ordered to withdraw from the defense industry entirely. Anthropic, one of the leading AI labs, has found itself in a deeply contradictory position.

The confusion stems from conflicting U.S. government policies. The Trump administration previously directed civilian agencies to stop using Anthropic’s products and gave the Department of Defense a six-month window to wind down its cooperation with the company. However, before the directive could be fully implemented, the U.S. and Israel launched a surprise attack on Iran, plunging the region into escalating conflict. Meanwhile, Anthropic’s models, integrated into Palantir’s Maven system, are still being used by the Pentagon to provide targeting intelligence and prioritize strikes.

A smoke plume rises following a missile strike on a building in Tehran on March 1, 2026.

Yet this wartime collaboration has not altered the company’s fate of being pushed out. Although Secretary of Defense Pete Hegseth has vowed to designate Anthropic a “supply chain risk,” no legal action has been taken so far. At the same time, defense contractors like Lockheed Martin have already begun replacing Anthropic’s models, and many startups dependent on defense contracts are scrambling to find alternatives.

Moving forward, Anthropic must navigate both the ethical controversy surrounding the use of its technology in active war zones and the looming threat of litigation. The once-celebrated AI star is rapidly being pushed to the margins of the military-tech ecosystem.
