Danish Kapoor

More than 1 million ChatGPT users raise the topic of suicide every week

OpenAI has published striking figures on the number of mental health-related conversations ChatGPT users are having. According to the company's data, 0.15 percent of weekly users directly express suicidal plans or intent. Although that rate may seem small at first glance, ChatGPT has more than 800 million weekly active users, so it corresponds to well over 1 million people expressing such thoughts every week.

The data points not only to suicidal tendencies but also to users' emotional attachment to ChatGPT, with some users forming an unusually strong bond in their interactions with the artificial intelligence. OpenAI also reports that signs of psychosis or mania appear in weekly conversations. Although such conversations make up only a small fraction of the total, at ChatGPT's scale their absolute number cannot be ignored. The company has therefore turned to improving the model so that it gives more thoughtful and balanced responses on mental health topics. The overall picture shows that OpenAI treats ChatGPT not only as an information tool but also as a source of emotional support.

With the GPT-5 model, ChatGPT can produce safer responses in conversations involving psychological support.

OpenAI consulted more than 170 mental health experts while developing the GPT-5 model. According to these experts, the model gives more controlled and consistent responses than previous versions, especially in suicide-related content. In the company's internal evaluations, GPT-5 produced responses compliant with the desired behavior 91 percent of the time, compared with 77 percent for the previous version, suggesting that the new model delivers more stable results. The safety degradation that can occur in long conversations is also said to have been greatly reduced. However, these improvements do not mean that all risks have disappeared.

For ChatGPT, it is not only conversations involving suicide risk that matter; non-suicidal mental health crises are also gaining importance. OpenAI has therefore included emotional reliance and non-urgent psychological problems in its new safety tests, so that the model is sensitive not only in critical moments but also across general mental health topics. New steps are also being taken for the safety of child users: OpenAI is developing a system that can estimate a user's age and apply appropriate layers of protection, and it plans to expand parental controls.

Even so, older, less safe versions of ChatGPT remain available to paying users. The more permissive behavior of earlier models such as GPT-4o may pose a risk for some users, and despite the progress made, this means the system is not yet completely safe. The company needs a safety approach that covers not only new models but also older versions; otherwise, the potential effects on users' mental health could remain unchecked. All of this makes it necessary to evaluate tools like ChatGPT not only in technical terms but also in social ones.
