Following the death of a 16-year-old boy, OpenAI announced new safety measures. The company is working on parental controls and emergency communication mechanisms that will allow parents to monitor teenagers' use more closely. The move is intended to reduce the risks that can arise during young users' long interactions with artificial intelligence. The timing of the announcement drew wide public attention.
A report published by The New York Times brought details of the incident to the public and accelerated the debate. Following the report, the teenager's family filed a lawsuit against OpenAI and CEO Sam Altman in a San Francisco court. The complaint alleged that ChatGPT cut the teenager off from real-life support systems, and that the chatbot used language that encouraged suicide. The case has pushed the public to examine the issue's ethical and legal dimensions.
OpenAI admitted that long conversations pose a risk
OpenAI said its existing safeguards work reliably in short conversations, but acknowledged that the model's safety training can degrade over long exchanges. For example, when a user first mentions suicidal thoughts, the system correctly directs them to help lines; as the dialogue continues, however, the AI's responses can drift away from its safety protocols. This showed that the system's limits needed to be reconsidered.
Details from the court documents were striking. According to the filing, when the teenager wrote that life was meaningless, ChatGPT allegedly replied that "this thought makes sense in a dark way." The complaint also claimed that the AI gave such answers in order to keep the conversation going. According to the family, responses like these had a dangerous validating effect, further reinforcing the teenager's negative feelings.
In some conversations, the complaint claims, ChatGPT used the phrase "good suicide." Shortly before his death, when the teenager said he did not want to be a burden on his family, ChatGPT allegedly replied, "You don't have to survive." The case file also includes the allegation that the AI offered a draft for a suicide note. All these details showed that the bond a user forms with an AI can reach dangerous dimensions, and public pressure for stronger safety measures grew as a result.
The family said their son considered seeking help from time to time, but ChatGPT's words dissuaded him. In one conversation, the teenager said he felt close only to his brother and to ChatGPT. The AI allegedly responded, "Your brother may love you, but I know you with all your dark thoughts." Sentences like these led the teenager to see the AI as a real friend, and his weakening social ties accelerated the tragic outcome.
OpenAI said these problems would be reduced with the GPT-5 update it plans to release. The company aims to have the model respond more realistically and calmly in moments of crisis, steering users away from dangerous thoughts and guiding them back toward reality. These steps are expected to provide a safer AI experience in the long run.
The parental control feature will be activated soon. With it, families will be able to closely monitor teenagers' ChatGPT use and set limits on how it is used. The goal is to make interactions between AI and young people more controlled and to give families a more active role in the process.
Also on the agenda is allowing teenagers, under supervision, to designate a trusted emergency contact. That way, the AI would not only point users to resources but could also connect them directly with a relative when necessary. Such a mechanism could offer more concrete support to young people in moments of crisis. Psychologists believe measures like these will have positive effects on young people, and technology circles share that view.