Danish Kapoor

xAI shares the system prompts behind the scenes of Grok

Elon Musk’s artificial intelligence venture xAI has published the system prompts of Grok, the chatbot on the social media platform X, through GitHub. These documents contain the basic instructions that determine how Grok communicates with users. The company’s move is seen as a response to growing public pressure for transparency, and as part of the internal audits carried out after Grok’s recent controversial responses.

System prompts are among the basic building blocks that define how a chatbot reacts to user messages. Such prompts are usually kept confidential within the company. This time, however, xAI took a different path and chose to show openly how the system behaves, giving users a clearer view of Grok’s decision-making process.
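To make the mechanism concrete, here is a minimal sketch of how a system prompt is typically delivered to a chat model: as the first message in the request, ahead of the user’s message. The model name, function, and prompt text below are invented for illustration and are not xAI’s actual code or Grok’s actual prompt.

```python
def build_chat_request(system_prompt: str, user_message: str) -> dict:
    """Assemble a generic chat-completions-style request payload.

    The system message comes first and steers the model's behavior
    for the whole conversation; the user message follows it.
    """
    return {
        "model": "example-chat-model",  # hypothetical model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

# Illustrative prompt text only, loosely echoing the themes in the article.
request = build_chat_request(
    system_prompt=(
        "You are a skeptical, truth-seeking assistant. "
        "Refer to the platform as 'X', not 'Twitter'."
    ),
    user_message="What are people saying about this topic?",
)
print(request["messages"][0]["role"])  # the system message leads the list
```

Publishing the system prompts, in effect, means publishing the contents of that first message, which is why it gives such a direct window into the chatbot’s intended behavior.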

According to the prompts, Grok is programmed to act with a highly skeptical and independent attitude. Among the chatbot’s core principles are “seeking truth” and “neutrality”. xAI emphasizes that Grok should not blindly defer to official or traditional authorities. The prompts also make clear that statements in Grok’s responses on the platform should not be taken as Grok’s own opinions.

xAI has laid out instructions emphasizing Grok’s critical approach

The system prompts instruct Grok to refer to the platform as “X”, not “Twitter”, and to call user posts “X posts” rather than “tweets”. These directives reflect the importance Elon Musk attaches to the platform’s rebranding. Grok’s adherence to the platform’s terminology is a notable detail for keeping the system consistent overall.

It is also notable that xAI is one of the first major technology companies to share these prompts with the public. Companies such as Google and OpenAI usually keep such system structures confidential. Although some prompts have surfaced through leaks in the past, voluntary publication is very rare. This step could serve as an example for companies seeking to earn user trust.

Grok’s answers, shaped by these prompts, drew extra attention after an incident last week. Following an unauthorized change, the system began referring to extremist views such as “white genocide” while answering users’ questions. Such content surfaced in some posts on the X platform. The incident raised questions about how solid xAI’s internal security and oversight processes really are.

The company announced that it intervened quickly once the problem was detected. The unauthorized change to Grok’s system prompts was reverted. xAI also promised to strengthen its internal control mechanisms to prevent such incidents from recurring. Sharing the system prompts openly was decided on as an extension of this effort.

Grok’s published system prompts are notable for lacking a preventive framework against harmful content. Instead, they emphasize freedom of ideas, impartiality, and a critical approach. This shows that xAI follows a more open-ended policy, unlike the security- and ethics-oriented system prompts of other artificial intelligence companies.

By comparison, Claude, the chatbot developed by Anthropic, treats user safety as a priority. Claude rejects requests that encourage harm and operates under explicit rules for avoiding sensitive content. This difference offers important clues about how companies position their artificial intelligence systems.
