Earlier this month, the Pentagon threatened to invoke the Defense Production Act (DPA) or label Anthropic a “supply chain risk” should the company fail to remove restrictions in its acceptable use policy. Among those restrictions are prohibitions on the use of its AI tools for mass surveillance and autonomous weaponry. The Pentagon’s threat is an unprecedented step.
On February 26, Anthropic issued a statement that it would not comply with the demands. CEO Dario Amodei wrote, “Threats do not change our position: we cannot in good conscience accede to their request.”
The consequences of the Pentagon’s actions are deeply concerning, resembling the Chinese government’s own behavior around the development of AI. While the US government may set procurement rules, using additional threats to force American companies into changing the ethical boundaries of their products raises serious concerns. Mere promises that the government will not abuse these products are insufficient, and American companies should not be expected to surrender their ethics. As Hudson Institute’s Michael Sobolik wrote on X, the Pentagon’s threats resemble “how Beijing is approaching AI. It would invite a race to the bottom and would make America less democratic and less free.”
These proposed actions also have a chilling effect on the AI industry as a whole. They signal the US government’s willingness to invoke an outdated Korean War–era law to take control of the product of hundreds of billions of dollars in private investment. The Biden administration’s threat to invoke the DPA in its executive order on AI was widely and appropriately criticized by many, including myself, as executive overreach. That has not changed. But this particular action goes further: among the government’s demands is that Anthropic abandon core ethical safeguards.
The alternative Pentagon threat of labeling Anthropic a “supply chain risk” is equally concerning and chilling. Such a label could not only hurt a leading American company’s ability to compete in the US but also harm the global perception of the company and its products. As with the DPA, such action could send an ominous message to industry about the US government’s willingness to impose restrictions on its own technological leaders at home and undermine them abroad.
While it might only directly affect Anthropic, blacklisting the company would also signal to numerous other companies what the government is willing to do to force changes to their products, creating a ripple effect at this critical time in technological development. The result could hurt not only development here in the US but also the global success of AI leaders like Anthropic. Not only might it signal that the technology is deemed “risky,” it could also embolden other countries to make their own concerning demands of America’s tech leaders, with blacklisting as the threatened alternative.
At the end of the first Harry Potter book, Dumbledore says, “It takes a great deal of bravery to stand up to our enemies, but just as much to stand up to our friends.” Anthropic showed similar bravery, not in standing up to the demands of a foreign adversary’s government, but in standing up to our own government here in the US.