The “Cancel ChatGPT” movement is gaining momentum after OpenAI’s latest move

UPDATE (March 1, 2026): I updated this article with comments from OpenAI CEO Sam Altman near the end of the piece.
There are no truly honest participants in the artificial intelligence race, but if there were one, it might be Anthropic.
There are no moral leaders in this space, which is sad. Still, Claude maker Anthropic took a strong stand this week against the United States government, to the dismay of the Trump administration.
Anthropic was designated a supply chain risk this week, and was briefly banned from use across US government agencies. Why? A blog post from Anthropic laid out the company’s two big red lines: no use of Claude AI for autonomous weapons, and no mass surveillance of United States citizens.
It’s not unusual for governments of any stripe to salivate at the thought of turbo-charged AI mass surveillance, but it is unusual for a major tech company like Anthropic to take such a strong stand against it in an age of weak governance. But hey, there’s always someone willing to run into the moral abyss in the name of money.
Hello, Sam Altman.
OpenAI CEO Sam Altman has gladly stepped in to fill the gap for the US Department of Defense, pledging ChatGPT and other OpenAI technologies to the cause.
In a post on X, Altman said OpenAI models would not be used for mass surveillance, but that claim was quickly contradicted by a US government official, who said OpenAI models would be used for “all legal purposes.” Mass surveillance of US citizens is legal “in certain circumstances” as part of the post-9/11 US Patriot Act, which allows for the mass harvesting of social media metadata, although some aspects of it have been curtailed in recent years.
“Today was a CRAZY day in the AI space. Morning – Anthropic CEO Dario Amodei refused to cooperate with the Pentagon because it wants to use Claude for mass surveillance and autonomous killer robots. Afternoon – OpenAI’s Sam Altman came out in support saying, ‘In all…’” — February 28, 2026
Anthropic wanted to control how its technology would be used, rather than relying on a legal framework whose rules and limits are still a matter of debate. Altman, by contrast, is happy to let the US government decide how OpenAI products are used, which under certain parts of the Patriot Act could easily lead to mass surveillance of US citizens, directly or indirectly as part of provisions for surveillance of foreign citizens (which, by the way, is perfectly legal under US law).
The move sparked an immediate backlash in the ChatGPT and OpenAI communities online, with Reddit threads racking up thousands of upvotes from users claiming to be cancelling their subscriptions.
“He is now training a war machine. Let’s see the proof of cancellation.” — from r/ChatGPT
“Time to cancel ChatGPT Plus after three years. Anthropic got punished for being ethical, and Sam Altman just leapt at the Pentagon’s money.” — from r/OpenAI
“Let me get this straight: Anthropic refused to work with the DoW unless it could guarantee its technology would not be used for surveillance or assassination. The DoW said it needs unrestricted capabilities. Anthropic declined to provide full access. OpenAI stepped in where Anthropic stood firm on AI safety.…” — February 28, 2026
OpenAI recently closed a funding round valuing the company at $730 billion, with backers including Amazon, SoftBank, and NVIDIA. Microsoft has said it will continue to work with OpenAI, despite noting in a recent FT interview that it will start building and releasing its own models.
Unfortunately, no other AI companies seem willing to take a stand against mass surveillance or autonomous weapons. Google removed an explicit ban on such technology from its internal principles last year. Microsoft is cool with autonomous weapons too, as long as a human pulls the final trigger. Amazon has no restrictions other than vague “fair use” language, and Meta has not been shy about flirting with Pentagon military contracts. And we all know where Palantir stands.
The genie is out of the bottle, so to speak. ChatGPT is good at simulating human text, but even the best models tend to fail spectacularly at child-level logic puzzles.
Are you looking forward to a world where AI models that routinely hallucinate things that don’t exist get to determine whether or not you are a threat to national security?
As long as Sam Altman and his friends stay rich, they don’t seem to care much about it, or about you.
UPDATE (March 1, 2026): Added comments below from OpenAI CEO Sam Altman about his pivot to supporting the United States Department of Defense.
Since this article was written, OpenAI and Sam Altman have been in damage control mode.
In an “AMA”-style Q&A session on X, Sam Altman said that the US Department of War will respect the “red lines” outlined by OpenAI by not using its AI technology for autonomous weapons or for mass surveillance of United States citizens, although it remains unclear how these safeguards will be implemented and maintained.
He suggested that existing US law covers these cases by default, although legal experts have warned that surveillance of non-US citizens can sweep up data on US citizens in an indirect or incidental manner.
“We deliver the system (including choosing which models to use), and they can use it in accordance with legal procedures, including laws and regulations regarding autonomous weapons and surveillance. But we get to decide what products we’re going to build, and the DoW understands that there are a lot of risks that we have…” — March 1, 2026
People don’t exactly buy it. It makes little sense for the Trump administration to publicly attack Anthropic’s stated position while jumping headfirst into an embrace of OpenAI. The main difference seems to be that OpenAI is happy to let the US Department of Defense interpret what “legal” means, while Anthropic wants to retain full control over how its technology is used.
It seems that Altman is relying on little more than hopes and prayers that his technology won’t be used for nefarious purposes, which seems naive at best. The current US administration has shown that it is willing, at the very least, to stretch interpretations of the US Constitution and landmark legal precedents. I’m not sure why there’s any reason to expect OpenAI’s technology won’t be swept up under the guise of “national security,” a justification that government agencies of all stripes have abused in the past and continue to abuse today.
As of this writing, Anthropic’s Claude AI app has claimed the #1 spot over ChatGPT on both the Android and iOS app stores. Claude AI is also available for Windows 11.
Join us on Reddit at r/WindowsCentral to share your thoughts and discuss our latest news, reviews, and more.




