Pentagon Threatens to Cut Off Anthropic Over AI Safety Restrictions

The Pentagon is threatening to end its relationship with Anthropic entirely unless the AI company removes safety restrictions on military use of its models, according to an Axios report citing an administration official. The ultimatum marks a sharp escalation in the standoff between AI safety advocates and national security officials, coming less than a day after news broke that Anthropic's Claude AI was used in a classified Venezuela raid that killed 83 people.

According to the Axios report, Pentagon officials have been negotiating for months with four major AI companies: Anthropic, OpenAI, Google, and xAI. They are pushing these companies to allow military use of their tools for what defense officials describe as "all lawful purposes." This includes weapons development, intelligence collection, and battlefield operations.

Anthropic has refused to agree to these terms, maintaining its existing safety policies that prohibit the use of its AI for weapons, surveillance, and violent ends. The Pentagon, according to the administration official, is getting fed up with the company's position and is now threatening to cut off access entirely.