Danish Kapoor

Google clarifies controversy surrounding Gemini AI rendering tool

Google’s AI-powered image generation tool Gemini has recently been at the center of a notable controversy. The tool produced inaccurate and inappropriate images, depicting racially diverse Nazi-era figures and historical figures such as the founding fathers of the United States, as well as diverse U.S. senators from the 1800s. This has raised concerns about the accuracy of the images produced by Google’s AI.

Problems with Gemini AI’s image generation

Google published a blog post about these events and attributed what happened to tuning problems. Prabhakar Raghavan, Google’s senior vice president, explained that Gemini’s effort to show a diverse range of people failed to account for cases where such diversity clearly should not apply. In addition, over time the model became overly cautious, beginning to misinterpret even some harmless prompts as sensitive.

In particular, the images Gemini produced for the prompt “create a U.S. senator from the 1800s,” which featured women of color and an Asian American man, showed that the AI went overboard in some cases. This led to misleading and unacceptable results, such as racially diverse Nazi imagery.

Raghavan stated that Google is sorry for how this feature performed and that the company wants Gemini to work properly for everyone. He emphasized that Google’s goal is to depict people of different ethnicities when users request generic images such as “football players” or “someone walking a dog.” However, he added that when Gemini is asked for images of a specific type of person, or of people in particular cultural or historical contexts, users should receive an accurate answer appropriate to what they are asking.

As of February 22, Google stopped users from creating images of people with its Gemini AI tool, just weeks after the image creation feature launched. Raghavan said that Google will continue to test and significantly improve Gemini AI’s image generation capabilities before re-enabling the feature. Regarding “hallucinations,” a known difficulty in large language models, he acknowledged that artificial intelligence can sometimes produce incorrect output and said that continuous improvements will be made in this area.

This incident once again highlighted the difficulty AI systems face in producing images that are historically accurate and culturally sensitive. Google says it will continue to take steps to address such issues and maintain users’ trust.
