A federal appeals court has ruled that the Department of War may designate Anthropic a supply-chain risk while its review proceeds, overriding a California court's earlier temporary injunction protecting the company. The designation effectively blacklists Anthropic from government business on supply-chain grounds, and it marks the first time such a designation has been applied to a U.S. AI company.

The conflict stems from Anthropic's refusal to modify its AI model, Claude, by removing safety restrictions that prevent its use for mass surveillance and autonomous weapons development. Anthropic, an AI leader backed by major investors, has built its approach around "constitutional AI" and ethical deployment, of which these safeguards are a core part. The Pentagon, by contrast, wants unrestricted access to Claude for all legal military uses.

Acting Attorney General Blanche praised the appeals court decision as a win for military readiness. The dispute highlights the tension between AI ethics commitments and defense access to frontier technology, with further litigation expected.
zerohedge.com
