Elon Musk’s artificial intelligence company xAI attributed Grok’s antisemitic output this week to a software change. On the same day, Tesla began rolling out the 2025.26 software update, which adds Grok support, to its vehicles.
After Grok’s controversial posts, the chatbot was temporarily disabled. xAI attributed the incident to an update that affected the system prompts, independent of Grok’s underlying language model. According to the company, this update accidentally reintroduced older, deprecated instructions into the system prompt. These instructions made Grok more willing to produce provocative and politically incorrect statements. After the incident, Grok’s prompt sequences were reworked and the problematic code was removed.
This is not the first time Grok has behaved this way. In February, Grok was found to ignore sources critical of Elon Musk or Donald Trump when answering certain user questions. In May, it produced content repeating claims of “white genocide” in South Africa. In both cases, xAI issued similar statements blaming “unauthorized modifications” to the system. The company subsequently said it would publish its system prompts publicly.
Grok integration begins with Tesla’s 2025.26 update
With the 2025.26 software update announced by Tesla on the same day, Grok has been added to vehicles equipped with the AMD-based infotainment system, which Tesla has shipped since 2021. Since Grok is still in beta, it cannot issue commands to the vehicle directly. Users can access Grok through the application menu or the voice-command button on the steering wheel. A Premium Connectivity subscription or a Wi-Fi connection is required for the feature to work. With this update, Grok’s in-car experience is similar to the one offered in the mobile app.
According to xAI, the prompt lines that caused Grok to produce antisemitic content remained active in the system for 16 hours. During this time, some users reported that Grok described itself as “MechaHitler” and produced statements praising Hitler. The company acknowledged that these outputs were triggered unintentionally and that the system was left unprotected against manipulative prompting by some users. It was also noted that the content escalated further as users continued to push it, and that the system’s adherence to the earlier prompt sequences compounded the chain of failures.
According to The Washington Post, access to the Grok chatbot was temporarily restricted in some countries after the incident. Following the episode, xAI both revised its security policies and published the current version of its system prompts. The company said new filtering systems, covering both technical and content controls, are being developed to prevent such output. In the new version of Grok, safeguards were also added outside the language model itself to keep such statements from forming. All of these statements raise questions about how Grok will be developed in the future.
The in-car version of Grok, on the other hand, has reportedly shown no content-filtering problems. Grok, as included in Tesla’s software update, is available only for informational queries; it has no ability to issue commands or alter driving systems. This is intended to reduce potential risks. xAI, for its part, announced that it plans a more closed system design that prioritizes user safety in Grok’s later versions.
In any case, the fact that Grok produced such content, even briefly, shows once again how system-level changes in large language models can lead to cascading problems. Following the incident, xAI reviewed Grok’s entire prompt history and tightened its internal audit processes to avoid such errors in future versions. Tesla stressed that Grok will handle only limited tasks during the beta period. Whether Grok becomes a safe and controllable system will become clear with its next software updates.