Google has temporarily suspended Gemini's ability to generate images of people, following controversy over historically inaccurate depictions produced by the AI. The company announced the pause in a statement posted on the social media platform X, saying it is working to improve the historical accuracy of Gemini's outputs.
The company acknowledged the existence of “recent issues” related to historical inaccuracies and assured users that efforts are underway to address these concerns. Google emphasized its focus on releasing an improved version of the technology in the near future.
Gemini, Google’s flagship suite of generative AI models, was introduced earlier this month as part of the company’s effort to compete with rivals such as OpenAI and Microsoft’s Copilot. Its image generation tool produces images from text prompts.
Recent instances of Gemini generating incongruous images of historical figures, including the US Founding Fathers depicted as people of various races, drew complaints and ridicule across social media. Users expressed concern over the erasure of historical accuracy and the perpetuation of stereotypes in the AI-generated images.
Venture capitalist Michael Jackson, based in Paris, voiced his disapproval of Google’s AI on LinkedIn, denouncing it as “a nonsensical DEI parody” (DEI referring to Diversity, Equity, and Inclusion).
In response to the growing criticism, Google publicly acknowledged that some of Gemini's historical image depictions were inaccurate and said it was working to correct them promptly.
While Gemini’s capacity to generate a diverse range of images serves its global user base, Google acknowledged that the feature needs improvement, particularly in historical representations.
Generative AI tools like Gemini operate based on training data and various parameters, including model weights. While such tools offer significant advancements in AI capabilities, they have faced criticism for producing biased outputs, perpetuating stereotypes, and inaccurately representing historical figures.
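The point about model weights can be illustrated with a deliberately tiny sketch. This is not Gemini's architecture, and the `embed` and `generate` functions here are hypothetical stand-ins; the sketch only shows that a generative model's output for a fixed prompt is determined entirely by its learned weights, which is why biases absorbed during training surface in generated images.

```python
# Toy illustration (not Gemini's real architecture): a "model" is just
# learned weights applied to an encoded prompt. Different weights, which
# in practice come from different training data, yield different outputs
# for the exact same prompt.

def embed(prompt):
    """Hypothetical text encoder: map a prompt to a small numeric vector."""
    codes = [ord(c) for c in prompt.lower()]
    dim = 4
    # Toy embedding: average character codes over interleaved chunks.
    chunks = [codes[i::dim] for i in range(dim)]
    return [sum(c) / len(c) if c else 0.0 for c in chunks]

def generate(prompt, weights):
    """Hypothetical generator: weights transform the embedding into 'pixels'."""
    vec = embed(prompt)
    # A single linear layer stands in for billions of learned parameters.
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

# Two weight sets, standing in for models trained on different data.
weights_a = [[0.1, 0.2, 0.0, 0.0], [0.0, 0.0, 0.3, 0.1]]
weights_b = [[0.5, 0.0, 0.1, 0.0], [0.2, 0.2, 0.0, 0.4]]

# The same prompt yields different "images" under different weights.
out_a = generate("a founding father", weights_a)
out_b = generate("a founding father", weights_b)
print(out_a != out_b)
```

The takeaway is that a deployed model cannot produce anything its weights do not encode: correcting skewed outputs means changing the training data, the weights, or the post-processing around them, not just the prompt.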
Google faced similar backlash in 2015, when its image classification tool mislabeled Black men as gorillas. Despite promises to fix the issue, Google’s solution was simply to block the technology from recognizing gorillas altogether, as Wired later reported.