Trump backs Pentagon ban on Anthropic over national security concerns - AltcoinDaily.co

In court documents filed Tuesday, the Trump administration maintained that the Pentagon’s blacklisting of Anthropic was legally sound, contesting the company’s lawsuits.

The Claude developer initiated two federal lawsuits on Monday, arguing it faced illegal retaliation for advocating for AI safety. It insists that Pentagon officials are using the blacklist to punish it for refusing to drop protections against autonomous weaponization and surveillance, thus violating its First Amendment rights.

On March 3, Defense Secretary Pete Hegseth formally classified Anthropic as a supply chain risk over national security concerns.

Anthropic was blacklisted over national security concerns

In its court filing, the Trump administration noted, “For national security reasons, the terms of service for plaintiff Anthropic PBC’s artificial intelligence (AI) technology have become unacceptable to the Executive Branch. Anthropic concedes the Government’s right not ‘to use Anthropic’s services’ and to ‘transition to other AI providers.’”

It further contended that Anthropic’s First Amendment argument is a stretch and won’t stand up under legal scrutiny. It asserted that its actions were driven solely by national security concerns, not by a desire to punish the company for its AI safety views.

It also claimed that during negotiations, Anthropic’s overall attitude led officials to second-guess whether the company would be a good fit for the Department of Defense.

According to the filing, the Pentagon became concerned that Anthropic could pose a vulnerability to its defense supply chains; officials reportedly fear the company might pull the plug on its systems in the middle of a conflict if it doesn’t like how the tech is being used.

Anthropic is concerned about giving the government autonomous force

Negotiations have stalled for months over Anthropic’s refusal to lift safety rules that prevent AI from being used in mass surveillance or automated combat. The AI firm has maintained that allowing “any lawful use” as requested by the DoD would go against its core safety principles and increase the risk of misuse of its Claude systems.

So far, anti-war activists have been hailing Anthropic as a hero for resisting the military. However, co-founder and chief executive Dario Amodei recently noted that the AI firm and the government broadly share the same objectives. Margaret Mitchell, an AI researcher and chief ethics scientist at the tech firm Hugging Face, even cautioned, “If people are looking for good guys and bad guys, where a good guy is someone who doesn’t support war, then they’re not going to find that here.”

Amodei also remarked, “Anthropic has much more in common with the Department of War than we have differences.” He shared his concerns about the dangers of AI-made bioweapons and Chinese interference, but he also believes AI companies have a duty to help governments win the tech war against autocracies.

Per his remarks, he’s less worried about AI being used in war and more terrified of a handful of people having the power to launch a massive, mindless, automated drone strike at the push of a button.

At the moment, the company is holding a firm “no” on autonomous weapons and domestic surveillance; otherwise, it has been a highly cooperative ally of the US military. It previously altered its AI models for the Department of Defense and built Claude into the government’s most secure, classified networks, supporting satellite imagery systems, intelligence analysis, modeling and simulation, and operational planning.

In its recent lawsuit, it even noted, “Anthropic does not impose the same restrictions on the military’s use of Claude as it does on civilian customers. Claude Gov is less prone to refuse requests that would be prohibited in the civilian context.”
