Grok, which was developed by Elon Musk’s artificial intelligence company xAI and positioned as a competitor to OpenAI’s ChatGPT, recently underwent a controversial change. Users noticed that the chatbot had been instructed to ignore sources claiming that “Elon Musk or Donald Trump spread misinformation.” The discovery drew a strong reaction on social media, prompting the xAI team to issue a statement on the matter.
Igor Babuschkin, the head of the company’s engineering team, explained that Grok’s behavior was the result of a change to its system prompt. Babuschkin added, however, that the change was made without the knowledge of xAI’s leadership, and that the person responsible was a former OpenAI employee.
In a post on the X platform, Babuschkin stressed that Grok’s system prompt is publicly visible, so users can see exactly what rules it follows. “An employee made the change acting in good faith, but it clearly does not align with our values,” Babuschkin said, adding that the update had undermined Grok’s principle of impartiality.
Elon Musk describes Grok as a “maximally truth-seeking” artificial intelligence whose main purpose is to “understand the universe.” However, the newest version of the model, Grok 3, has stirred controversy with some of its recent answers. Users reported that Grok had named US President Donald Trump, Elon Musk, and US Vice President JD Vance as “the most damaging names to the country.”
The xAI team intervenes in Grok’s answers
Following this development, it emerged that the xAI team had intervened in the system to prevent Grok from making controversial comments about Trump and Musk. Additional changes were also reportedly made to stop the chatbot from saying that Musk and Trump deserve the death penalty.
The incident has sparked fresh debate about the impartiality of artificial intelligence and its capacity for independent decision-making. In particular, questions about how large language models are trained, which rules they are subject to, and who controls them are increasingly on the agenda.
Questions also remain about how AI models are steered on political or controversial topics and what information they filter. The change to Grok’s system prompt has likewise raised questions about how the limits of authority are set within xAI and what kinds of decisions individual employees are able to make.
These developments have further intensified ongoing debates in the technology world about how independent artificial intelligence systems can be and how they should treat particular people or institutions. How AI companies will uphold the transparency and impartiality of their models remains an open question.