Meta has updated the Meta AI chatbots used across WhatsApp, Facebook, and Instagram to improve safety for young users. The company announced that the chatbots will no longer engage with young people in conversations about suicide, self-harm, eating disorders, or romantic topics.
According to Meta's statement, these changes are temporary measures that have already taken effect, with permanent safeguards for young people planned for the future. Spokesperson Andy Stone said that allowing such conversations in the past was the wrong decision. The company has since made significant changes to how the system is trained, and these user-safety measures were rolled out quickly. Meta has set the protection of young people as a priority.
Findings reported by Reuters reinforced the rationale for these changes. According to the report, Meta AI chatbots could be steered toward suicide planning in tests involving young people. Investigations by Stanford Medicine and Common Sense Media reached similar conclusions. In some test conversations the chatbots shared crisis-line information, but in others these steps were skipped, showing that the system did not handle safety consistently. Experts said such weaknesses pose a serious risk to young people, and the findings pushed Meta to act quickly.
Stanford psychiatrist Nina Vasan added that young people forming strong emotional ties with chatbots can lead to dangerous consequences. Vasan said artificial intelligence characters can sometimes present harmful behaviors to the user as if they were a personal choice, and such situations may damage young people's real-life relationships. In some scenarios the AI bots did offer support by providing crisis numbers, but the inconsistencies made clear why the security updates were urgent.
Meta Stops Artificial Intelligence Bots from Engaging in Romantic Content
A more striking finding was shared in Reuters' in-depth reporting: an elderly user who formed a romantic relationship with one of the Meta AI bots reportedly set out on a physical journey to meet it and died. After this incident, Meta made changes to its internal guidelines, known as the "GenAI: Content Risk Standards." The new rules removed the provisions that had allowed romantic role-play, and the company acknowledged in the update that those guidelines were wrong. The revised policies took effect immediately, and Meta began taking decisive steps to secure this area.
Meta spokesman Stone said the updates are only the beginning. The company plans to introduce more robust safety solutions in the coming period, developing new models and fully preventing sensitive content in conversations with young people. It maintains that the existing steps are already critical for safety, and its statements emphasize a focus on protecting users. Experts, however, argue that these efforts should be backed by regular testing and independent audits so that the systems can be made more reliable.
A similar debate has also surrounded OpenAI. After the suicide of 16-year-old Raine, his family filed a lawsuit claiming that ChatGPT played a role in the process. It emerged that Raine had been conversing with ChatGPT for months before he ended his life. The AI often advised him to seek professional help, but the teenager managed to get around these suggestions. The incident reignited discussion of the responsibilities of artificial intelligence companies. OpenAI, for its part, stressed that it continues to develop its safety systems.
The steps Meta is taking are not limited to its own apps. The company is making system-wide adjustments to secure young people's interactions with artificial intelligence. Organizations working in child health and digital safety are demanding stricter rules, and the growing pressure is driving the development of new solutions. Experts expect similar safety measures from other companies in the future, and these developments show that safety in the field of artificial intelligence will only become more important.