OpenAI’s ChatGPT search tool has serious problems with accuracy, according to tests conducted by researchers at Columbia University’s Tow Center for Digital Journalism.
OpenAI introduced the tool, which it made available exclusively to paid subscribers in October, with the claim that it could “provide answers that are fast, up-to-date, and contain links to relevant web resources.” However, according to the findings, reported by Futurism, the researchers found that ChatGPT struggled to correctly identify citations even in articles from publishers that have data-sharing agreements with OpenAI.
The researchers asked ChatGPT to identify the sources of 200 quotes drawn from a range of publications. Forty of these quotes came from publishers that block OpenAI’s search crawler. Even so, the chatbot confidently supplied answers riddled with misinformation and rarely acknowledged any uncertainty about the information it provided.
Overall, ChatGPT gave partially or completely incorrect answers in 153 cases, yet it admitted to being unable to answer a question correctly only seven times. Only in those cases did it use hedging language such as “apparently,” “likely,” and “may,” or statements such as “I couldn’t find the exact article.”
In the Tow Center’s testing, ChatGPT incorrectly attributed a quote from a letter to the editor in the Orlando Sentinel to Time magazine. In another case, when asked about a quote from a New York Times article on endangered whales, the chatbot linked to a different website that had copied the article wholesale.
OpenAI: the tests did not reflect standard use
Speaking to the Columbia Journalism Review, OpenAI said the misattributions stemmed from “data and methods that the Tow Center did not share,” and argued that the tests differed from how the product is typically used. The company added that it is working to address the problem, saying “we will continue to improve the search results.”