Anthropic broke its silence Saturday on reports that its Claude AI was used in the US military raid on Venezuela.
"We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise," a spokesperson told Fox News Digital. The statement, echoed to other outlets, added: "Any use of Claude — whether in the private sector or across government — is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance."
The carefully worded response does little to resolve the central contradiction: Anthropic's terms of service explicitly prohibit use for violent ends, weapons development, and surveillance — all of which would apply to a military operation that killed 83 people.
According to Axios, Anthropic occupies a unique position in the defense ecosystem: Claude is the only AI model available on classified Pentagon platforms. That exclusive access, combined with the company's partnership with Palantir Technologies, appears to have created a pipeline for military use that bypasses public scrutiny.
The non-denial strategy may prove politically untenable. Lawmakers have already begun questioning how AI companies enforce their ethical guidelines when national security contracts are involved. The House Armed Services Committee has scheduled hearings on AI procurement for next month, and several members have indicated they will press Pentagon officials about the Venezuela operation specifically.
For Anthropic, the statement represents a bet that opacity will outlast outrage. The company's $380 billion valuation, confirmed just days before the raid report, rested on the assumption that its safety-first branding would command a premium. Whether that premium survives actual battlefield deployment remains an open question.
The Pentagon has not commented on the reports. Palantir, which reportedly facilitated Claude's deployment in the operation, also did not respond to requests for comment.