As artificial intelligence technology develops rapidly, the US government continues to strengthen its safety measures for the field. As part of these efforts, two leading AI companies, OpenAI and Anthropic, have agreed to make new AI models available to the US government before they are released to the public. The companies formalized the arrangement by signing memoranda of understanding with the US AI Safety Institute, giving the government access to review AI models both before and after their release.
The safety risks of AI have recently become a growing concern for both technology companies and governments, and this step is intended to help assess the potential risks AI models may pose and to identify and address problems before they emerge. Through the collaboration, the US government aims to accelerate its work on ensuring the safety of AI technology. The institute also plans to coordinate on evaluations with its counterpart in the United Kingdom, laying the groundwork for an international safety network.
An important step for AI safety
The US government’s access to AI models is a significant step toward regulating and securing the technology. It comes amid ongoing debate over expanding AI regulation at both the federal and state levels in the US. The rapid pace of AI development demands that legislators be especially careful about what kinds of rules they impose in this area, while ensuring that safety measures do not come at the cost of hindering innovation.
Last Wednesday, the California state legislature passed the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB 1047). The bill requires AI companies to implement specific safety measures before training the most advanced AI models. It targets large-scale AI projects while offering some flexibility to small-scale open-source developers. Even so, the bill has drawn criticism from many AI companies, most prominently OpenAI, which opposed it, and Anthropic, which pressed for amendments; both have expressed concern that such regulation could stifle innovation. The bill now awaits the signature of California Governor Gavin Newsom.
Meanwhile, the White House continues to work with major AI companies on voluntary commitments. Several leading companies have voluntarily pledged to invest in cybersecurity and anti-discrimination research, and to develop watermarking and other labeling technologies that identify AI-generated content and help prevent its abuse.
Elizabeth Kelly, Director of the US AI Safety Institute, said the new agreements are just the beginning but mark an important milestone in managing the future of AI responsibly. Kelly added that she believes the collaboration will allow government and technology companies to play a more active role in ensuring the safety of AI technology.
OpenAI and Anthropic’s collaboration with the US government represents an important step toward the responsible development and use of AI technology. As the future of AI is shaped by such collaborations and regulations, discussions about the technology’s safety and ethical use are likely to continue.