Danish Kapoor

OpenAI and Google employees side with Anthropic against the Pentagon

The US Department of Defense (DoD)'s classification of the artificial intelligence company Anthropic as a "supply chain risk" has sent shockwaves through the technology sector. Following the decision, more than 30 researchers and engineers at OpenAI and Google DeepMind signed a court statement supporting the lawsuit Anthropic filed. According to the document submitted in federal court, Google DeepMind chief scientist Jeff Dean is among the signatories. The statement emphasizes that the Pentagon's decision could have serious consequences for both the industry and the scientific research community.

The US government generally reserves the "supply chain risk" label for foreign entities that may threaten national security, which makes it notable that the Department of Defense applied the same designation to a US-based AI company. At the center of the dispute are limits Anthropic places on the use of its own AI technology: the company says it does not allow its technology to be used for mass surveillance of Americans or integrated into systems that autonomously trigger lethal weapons.

Limits on the use of artificial intelligence sparked controversy

Pentagon officials, for their part, argue that AI tools should be available for any purpose as long as it is "legal". Anthropic, however, maintains that these uses pose serious ethical and security risks. The dispute escalated into a crisis that directly affected the contractual relationship between the company and the US Department of Defense; last week, Anthropic filed two separate lawsuits against the department and several federal agencies.

In the supporting petition submitted to the court, the technology workers highlight a different point. In the signatories' assessment, if the Pentagon was dissatisfied with the terms of the current contract, it could have canceled it and chosen to work with another AI provider. Labeling the company a "supply chain risk" instead is, they argue, an unusual and controversial move for the industry, and such a step may damage cooperation between the private sector and public institutions.

It has also emerged that the Department of Defense quickly signed a new agreement with OpenAI after classifying Anthropic this way. The development drew a reaction from some OpenAI employees, who called on company management to support Anthropic and take a clearer stance against unilateral military use of AI systems.

The document in the court file states that the technical and contractual limits developers place on the use of AI systems are an important safety mechanism. With AI technologies developing rapidly and comprehensive legal regulation still unsettled, such limits carry particular weight; federal agencies ignoring them could trigger a broader debate within the industry.
