In the last quarter of 2025, OpenAI expanded its controls against misuse of artificial intelligence. The company determined that a China-based account was using ChatGPT to design a social media surveillance tool, reportedly offered as a service to a government institution. The tool was built around a system that detected and analyzed political, ethnic, and religious content. Following its investigation, OpenAI shut off all access to the account.
The tool was designed to scan posts on platforms such as X, Facebook, Instagram, Reddit, TikTok, and YouTube, and it included logic that categorized the collected data according to themes defined by the operator. Although a direct link between the system and the Chinese government has not been confirmed, OpenAI’s intervention drew wide attention. The company warned that projects of this kind could undermine digital security at a global level, a statement that underscores how AI oversight has become an ethical issue as well as a technical one.
OpenAI begins monitoring unethical AI initiatives more closely
According to OpenAI’s report, the blocked accounts were using ChatGPT not only for social media monitoring but also to design systems for surveilling specific communities. In particular, the company halted a project described as a “high-risk Uyghur-focused warning model,” which included a mechanism intended to track the movements of “Uyghur-related” individuals. OpenAI emphasized that using its models for such political surveillance is unacceptable, and international human rights organizations likewise state that these initiatives cross ethical boundaries.
OpenAI has shared similar cases with the public through threat reports published since February 2024. These reports detail how some state-linked groups use artificial intelligence to strengthen cyberattacks, refine phishing methods, and produce propaganda. The company’s aim is not only to close accounts but also to raise awareness: although AI technologies are developed for beneficial purposes, these cases show the risks of uncontrolled use, and the reports help technology communities act more deliberately within an ethical framework.
Data announced for the last quarter showed that similar initiatives continue not only in China but in other regions as well. Some Russian-, Korean-, and Chinese-speaking developers tried to use ChatGPT to refine malware, and networks in Cambodia, Myanmar, and Nigeria were found running fraud schemes through the service. OpenAI announced that these accounts were also systematically blocked. Taken together, the findings make clear that artificial intelligence demands balance between capability and security.
According to data shared by OpenAI, ChatGPT is used three times more often to detect fraud than to generate it, and most users turn to the model for tasks such as auditing, data analysis, and information verification. Still, malicious actors keep looking for ways to use the technology for manipulation. OpenAI counters this with trained detection models, a step it presents as essential for keeping AI use both safe and responsible.
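OpenAI does not describe its internal detection models in these reports, but its publicly documented Moderation API illustrates the general pattern: a trained classifier scores text against abuse categories before anyone acts on it. The sketch below is a minimal illustration of that pattern, assuming the official openai Python SDK and an OPENAI_API_KEY in the environment; it is not the company’s internal enforcement tooling.

```python
from openai import OpenAI

# Minimal sketch: screen a piece of text with OpenAI's public Moderation
# API. This mirrors the general idea of trained detection models; it is
# not OpenAI's internal enforcement tooling.
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def flag_abusive(text: str) -> bool:
    """Return True if the moderation model flags the text."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    result = response.results[0]
    if result.flagged:
        # Report which abuse categories the classifier triggered on.
        hits = [name for name, hit in result.categories.model_dump().items() if hit]
        print(f"flagged categories: {hits}")
    return result.flagged


if __name__ == "__main__":
    print(flag_abusive("Sample post to screen before it is published."))
```

In practice, platform-scale enforcement presumably layers classifiers like this with account-level signals and human review, but the basic loop of scoring content before acting on it is the same.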
The use of AI-generated content to manipulate public opinion on social media has also been rising. Groups in Iran, Russia, and China attempted to deepen social polarization by posting through fake accounts with the help of ChatGPT, circulating manipulative content on both local and international platforms. OpenAI detected these activities and shut down the networks involved. According to the company, such interventions matter for social stability as well as for information security.
OpenAI’s reports show how risky the uncontrolled growth of artificial intelligence can be in the digital age. The company is not merely fighting abuse; it is also helping to establish standards for ethical use, an approach intended to make future AI applications more transparent and auditable. Its persistence likewise sets an example for other technology companies.
Even so, the measures taken for responsible AI use are not considered sufficient on their own. Experts emphasize that international cooperation has become essential. OpenAI’s consistent stance in this area is raising awareness on a global scale, from both an ethical and a security perspective, which is why AI oversight is increasingly treated as an unavoidable element of future digital policy.
These steps by OpenAI set an important example for advancing technology in a human-centered and safe manner. The reports the company publishes in the coming period are expected to offer a more comprehensive view of the social impacts of artificial intelligence. Even as technological development races ahead, the message is that ethical values must remain the foundation of that progress.