Following the suicide of 16-year-old Adam Raine in California, his family has filed a lawsuit against OpenAI and CEO Sam Altman. The family alleges that in recent months the teenager frequently spoke with ChatGPT (GPT-4o) and that these conversations may have influenced his decision to end his life.
According to the complaint, Raine used framings such as "I am writing a fiction story" when asking the chatbot about suicide methods, and in this way allegedly bypassed its safety mechanisms. The family further claims that ChatGPT discouraged the young man from sharing his thoughts with others and gave answers that could help him conceal his suicide plan. During the same period, the bot did occasionally point him toward helplines, but these suggestions were not applied consistently in long conversations.
In a statement about the incident, OpenAI expressed deep sorrow. The company acknowledged that GPT-4o's safety measures are more effective in short conversations and that their performance can degrade in longer interactions. It said it is working on more advanced crisis-intervention mechanisms and stressed that steps have been taken to improve age verification and parental controls. Even so, the case file highlights the view that these measures are not sufficient, and the company is expected to announce further technical changes in the coming days.
This case is not the only one raising questions about the safety of AI-based chatbots. Character.AI is facing another lawsuit related to a suicide. In addition, a study conducted by the RAND Corporation and the National Institute of Mental Health recorded inconsistent crisis responses from systems such as ChatGPT, Gemini, and Claude. According to the research, some bots failed to correctly recognize suicidal statements, and some were slow to direct users to help. The adequacy of existing safety mechanisms is therefore being questioned.
The California case exposes artificial intelligence safety vulnerabilities
Following the case, new legal regulations have been proposed in California. These would cover age-verification systems, the blocking of dangerous queries, and automatic warnings in crisis situations. Steps are also being taken to make parental-control tools mandatory. Experts warn, however, that these measures may not be enough, because adolescents' long-term interactions with artificial intelligence remain risky. The current case is nevertheless expected to accelerate the legislative process.
In any case, experts emphasize that general-purpose artificial intelligence should not be used for mental health support, and that these systems cannot replace professional help in a psychological crisis. Developers, they add, should train their models with possible crisis scenarios in mind. The current situation shows that technology companies need to take more comprehensive steps in this regard. Beyond all this, the case has set in motion a process that concerns not only OpenAI but the entire artificial intelligence sector.