
Don’t miss the full story: reporting by Matt O’Brien at The Associated Press is the basis of this artificial intelligence-assisted article.
Anthropic’s refusal to allow its Claude AI to be used for autonomous weapons and mass surveillance has triggered a government standoff, a consumer backlash against rivals and a broader debate about AI’s readiness for military use.
• The Trump administration designated Claude a supply chain risk and ordered agencies to stop using it after Anthropic CEO Dario Amodei refused to lift ethical safeguards around autonomous weapons and domestic surveillance.
• Claude surpassed ChatGPT in U.S. phone app downloads for the first time, becoming the most downloaded iPhone app starting Saturday and the top app across all mobile platforms on Monday.
• Anthropic has said it will challenge the Pentagon in court once it receives formal notice of the penalties imposed against it.
• Former Navy pilot and robotics expert Missy Cummings criticized Anthropic, arguing the company helped create the problem by overhyping AI capabilities before resisting military applications.
• Cummings published research arguing generative AI should be prohibited from controlling weapons, citing AI’s tendency toward errors — known as hallucinations — that make it “inherently unreliable” in life-or-death situations.
• Anthropic had been the only major AI company approved for use in classified military systems, partnering with Palantir and other defense contractors before the fallout.
• OpenAI announced a deal with the Pentagon to replace Anthropic’s Claude in classified environments. The move triggered a 775% surge in one-star ChatGPT reviews on Saturday, prompting CEO Sam Altman to publicly acknowledge that the rollout was rushed.
• President Trump indicated the Pentagon would have six months to phase out Anthropic’s military applications, an announcement made around the same time he approved military strikes on Iran.
• Anthropic didn’t immediately respond to a request for comment. The Defense Department declined to comment on whether it is still using Claude, including in the Iran war, citing operational security.
This article is written with the assistance of generative artificial intelligence based solely on Washington Times original reporting and wire services. For more information, please read our AI policy or contact Steve Fink, Director of Artificial Intelligence, at sfink@washingtontimes.com
The Washington Times AI Ethics Newsroom Committee can be reached at aispotlight@washingtontimes.com.