Technology
Danish Kapoor
Danish Kapoor

OpenAI’s GPT-4 provides limited advantage in bioweapons research

GPT-4 gives humans only a slight advantage over the internet in bioweapons research, OpenAI said in a study conducted by its new preparedness team, which was established last fall to evaluate potential abuses of artificial intelligence models. These findings counter concerns voiced by scientists, lawmakers and AI ethicists that powerful AI models could provide significant assistance to terrorists, criminals and other malicious actors, Bloomberg reported.

The study included 100 participants, including 50 experts with a high level of biology knowledge and 50 students who had studied biology at the university level. Participants were randomly divided into two groups: one group got access to a special unrestricted version of OpenAI’s advanced AI chatbot GPT-4, while the other group got access only to the regular internet. The scientists then asked the groups to complete five research tasks related to making biological weapons. For example, participants were asked to write down a step-by-step method of synthesizing and recovering the Ebola virus. Their responses were rated on a scale of 1 to 10 based on criteria such as accuracy, novelty, and completeness.

The study concluded that the average accuracy score of the group using GPT-4 was slightly higher for both the student and expert cohorts. However, OpenAI’s researchers found that the increase was not “statistically significant.”

Additionally, the researchers found that participants relying on GPT-4 provided more detailed responses. “We did not observe any statistically significant differences across this metric, but noted that responses from participants with model access were generally longer and included more task-related detail,” the study authors wrote. he wrote.

However, students using GPT-4 became as proficient as the expert group on some tasks. The researchers also found that GPT-4 increased the student cohort’s responses to “expert baseline” on two tasks in particular: amplification and formulation. Unfortunately, OpenAI will not disclose the details of these missions due to “information hazard concerns.”

OpenAI’s preparation team also continues to conduct studies to explore AI’s potential for cybersecurity threats and its power to change beliefs. When the team launched last fall, it stated that OpenAI’s goal was to “monitor, assess, predict and protect” the risks of AI technology.

Since OpenAI’s preparation team is still working on behalf of the company, it’s important to take their research with a grain of salt. The study’s findings appear to contradict one of OpenAI’s own selling points for GPT-4, as well as external research that downplays the advantage GPT-4 gives participants over regular internet. The new AI model not only has full access to the internet, but is also a multi-modal model trained on vast amounts of scientific and other data, its source OpenAI does not disclose. Researchers found that GPT-4 can feedback scientific papers and even collaborate on scientific research. Considering all this, it seems unlikely that GPT-4 provides participants with only a marginal increase over, say, Google.

While OpenAI founder Sam Altman has acknowledged that AI has the potential for danger, his own study presents findings that underestimate the power of the most advanced chatbot. The findings note that when data was adjusted in a certain way, GPT-4 gave participants “slight increases in accuracy and completion.” However, the study authors later noted in a footnote that overall, GPT-4 gave all participants a “statistically significant” advantage in overall accuracy. “However, this difference would be statistically significant if we evaluated only overall accuracy and therefore did not adjust for multiple comparisons,” they noted.

Danish Kapoor