US Government Tightens AI Regulations Following Anthropic Dispute

The United States government is implementing new regulations for artificial intelligence (AI) suppliers, requiring companies seeking federal contracts to grant the government extensive rights to use their technologies. The move follows a significant clash with AI start-up Anthropic over government surveillance and military applications of AI.

According to draft guidelines from the US General Services Administration (GSA) reviewed by the Financial Times, federal contractors must grant an irrevocable license for all lawful uses of their AI models. The proposed regulations also address bias in AI outputs, mandating that suppliers deliver “a neutral, non-partisan tool” free of ideological influences, particularly on issues such as diversity, equity, and inclusion. The initiative appears to be a response to an executive order issued by President Donald Trump that criticized what he referred to as “woke” AI models.

Contract Dispute with Anthropic

The tightening of regulations coincides with a recent dispute between the US Department of War (DoW) and Anthropic. The conflict arose last week when Anthropic declined to grant the Pentagon unrestricted rights to deploy its models, citing concerns over domestic surveillance and the potential use of lethal autonomous weapons. In response, the Pentagon cancelled a contract with the AI developer valued at $200 million.

In a statement on social media platform X, DoW Secretary Pete Hegseth criticized Anthropic, calling the company’s actions a “master class in arrogance and betrayal.” He suggested that Anthropic’s true goal was to gain control over military operational decisions. Following the altercation, Trump directed federal agencies to terminate their contracts with Anthropic, subject to a six-month phase-out period.

OpenAI Steps In

In the wake of Anthropic’s falling-out with the Pentagon, rival AI developer OpenAI quickly secured a deal to deploy its AI models within the Pentagon’s classified networks. CEO Sam Altman stated that the agreement would include amendments to ensure the technology could not be misused for domestic surveillance of American citizens, emphasizing that the contract includes “red lines” against mass surveillance and the use of autonomous weapons.

Shortly after the deal was announced, OpenAI’s hardware lead, Caitlin Kalinowski, resigned. In a post on X, she acknowledged the crucial role AI plays in national security but expressed concern about the implications of surveillance conducted without judicial oversight, noting that decisions of such magnitude deserved more careful consideration than they had received.

As the GSA prepares to finalize its guidelines, industry consultation is expected to play a significant role in shaping the final regulations. The ongoing developments underscore the growing tensions between AI companies and government entities regarding the ethical use of artificial intelligence technologies.