Anthropic, an AI company, came under pressure from the U.S. government, specifically the Department of Defense, over the use of its technology. The DoD demanded that Anthropic permit "any lawful use" of its AI products, but the company refused, citing ethical concerns about mass surveillance and autonomous weapons. The dispute escalated into a standoff, with a deadline for Anthropic to comply or face blacklisting. CEO Dario Amodei publicly defended the company's principled stance against unethical AI applications.

After Anthropic held its ground, President Donald Trump ordered government agencies to cease using its technology, and Defense Secretary Pete Hegseth followed up by designating the company a supply chain risk, effectively barring it from government collaboration.

Surprisingly, the blacklisting boosted Anthropic's standing with the general public: its app, Claude, soared in popularity and experienced outages amid unprecedented user demand following the controversy. Tech workers and celebrities voiced support for the company's stance, solidifying public approval.

Anthropic responded to the blacklisting by expressing disappointment and vowing to challenge the government's actions in court, reiterating its commitment to its ethical principles. The company said it would not compromise its stance despite any intimidation from the government.
fastcompany.com
