Technology
Danish Kapoor

OpenAI bans Chinese accounts that used ChatGPT to edit code for a social media surveillance tool

In a statement on Friday, OpenAI announced that it had banned Chinese accounts that used ChatGPT to edit and debug code for an AI-powered social media surveillance tool. The ban was imposed after OpenAI identified accounts that were active during periods consistent with Chinese business hours and that targeted various social media platforms. According to the statement, the accounts operated the model through manual intervention rather than automation. OpenAI also said the users claimed to transmit the collected data to embassies and intelligence agencies outside China, making the case an international example of technology abuse.

OpenAI said it has named the operation “Peer Review.” According to the statement, the accounts operated as part of campaigns that used ChatGPT to identify content on social media, including criticism of China. The indicators cited include the Chinese language used by the accounts, usage concentrated within a specific time window, and detected patterns of manual intervention. Some accounts within the operation were also alleged to have generated year-end performance reviews, reports on behalf of clients, and ransom email content. These developments have reignited international debates on digital security and surveillance.

AI-based surveillance tools raise global security concerns

Commenting on the details of the case, OpenAI officials said they had not identified a similar situation before. The operators reportedly monitored social media closely and collected data from different platforms in order to supply information to intelligence actors. It is also claimed that the tool’s technical infrastructure was built on an open-source model from Meta’s Llama series. This highlights the risk that open-source AI models can be repurposed for malicious ends. The incident has once again raised the dangers that misuse of artificial intelligence can pose to international security and freedom.

In light of these developments, experts point to the need for a comprehensive assessment of the ethical limits of technology. OpenAI’s firm measure is interpreted as part of its efforts to prevent the abuse of its technology. According to experts, disinformation and intelligence activities spreading across digital platforms pose risks that can outweigh the advantages of artificial intelligence. They also argue that digital security policies and ethical standards across countries should be urgently reconsidered to prevent such incidents, with the aim of striking a balance between the innovations AI offers and the security needs of individuals and states.

These developments may also prompt a reconsideration of technology companies’ control and accountability policies in the international arena. OpenAI’s disclosures make clear that artificial intelligence is not only a tool for creativity and innovation but also an instrument of social and political influence. At this point, both governments and the private sector are urged to act within a framework of ethical values. The risks that technological advances bring should therefore be analyzed as carefully as the opportunities, and the necessary measures taken. Ultimately, this ban decision once again underscores the importance of security and ethical responsibility in the digital age.
