OpenAI is a leading name in the race to develop AI that matches human intelligence, but employees at the company's $80 billion research lab have not been shy about publicly voicing serious safety concerns. Those concerns were made even more apparent by a recent report in The Washington Post, in which an anonymous source claimed that OpenAI rushed through safety testing and celebrated its product before making sure it was safe.
An employee summarized the situation this way: "They planned the post-launch party before knowing whether the launch was safe. We basically failed at the process." Claims like these point to serious gaps in OpenAI's safety processes.
OpenAI's safety woes aren't limited to the claims of anonymous sources. Current and former employees recently signed an open letter demanding better safety and transparency practices after the company dismantled its dedicated safety team, a move that followed the departure of co-founder Ilya Sutskever. Jan Leike, a lead safety researcher at OpenAI, resigned shortly thereafter, writing that the company's safety culture and processes had taken a backseat to shiny products.
OpenAI's safety policies and the criticism they face
Safety is a core element of OpenAI's charter, which states that if another organization reaches AGI (artificial general intelligence) first, OpenAI will assist that effort in advancing safety rather than race to compete. The company says it is dedicated to solving the safety problems inherent in such a large, complex system. Yet the warnings from employees suggest that safety has been deprioritized in the company's culture and structure.
For its part, OpenAI defends its record. "We are proud of our track record of delivering the most capable and safest AI systems, and we believe in our scientific approach to addressing risk," company spokesperson Taya Christianson told The Verge. "Given the importance of this technology, rigorous discussion is critical, and we will continue to engage with governments, civil society, and other communities around the world in pursuit of our mission."
According to OpenAI and others studying this emerging technology, the risks are significant. "Current frontier AI development poses immediate and increasing risks to national security," said a report commissioned by the U.S. State Department. "The rise of advanced AI and AGI has the potential to destabilize global security in a manner similar to the introduction of nuclear weapons."
Internal discussions and future plans at OpenAI
The alarm bells at OpenAI are not new. The boardroom coup that briefly ousted CEO Sam Altman last year, which the board attributed to his not being "consistently candid in his communications," triggered an investigation that did little to reassure staff.
OpenAI spokesperson Lindsey Held told The Washington Post that the GPT-4o launch "cut no corners" on safety. But another unnamed company representative acknowledged that the safety review timeline had been compressed to a single week. "We're rethinking our entire methodology, realizing that this is not the best way to do it," the representative said.
OpenAI announced this week that it will collaborate with Los Alamos National Laboratory to explore how advanced AI models like GPT-4o can safely aid bioscientific research. The announcement repeatedly emphasized Los Alamos's own safety record. Around the same time, OpenAI was also reported to have built an internal scale to track the progress of its large language models toward artificial general intelligence.
This week's safety-focused announcements read like defensive window dressing in the face of mounting criticism of OpenAI's safety practices. Clearly, OpenAI is in dire straits right now, but PR efforts alone won't protect society. What really matters is the potential impact on people beyond the Silicon Valley bubble if OpenAI doesn't keep developing AI with strict safety protocols. The average person has no say in the development of privately held AGI, and no choice in how much protection they will receive from OpenAI's creations.
"AI tools can be revolutionary," U.S. FTC Chair Lina Khan told Bloomberg in November, while noting that "currently" there are concerns that the critical inputs to these tools are "controlled by a relatively small number of companies."
If the many allegations about its safety practices are true, they raise serious questions about OpenAI's fitness to serve as the custodian of AGI. Allowing a single group in San Francisco to control potentially society-changing technology is alarming, and the demand for transparency and safety inside the company is greater than ever.