Google suspends AI image feature from making pictures of people after inaccurate photos
Google halted the image generation feature within its Gemini artificial intelligence platform from making images of people Thursday after the program produced inaccurate responses to prompts.
The Verge published multiple screenshots Wednesday of the program creating historically inaccurate images, including people of color in Nazi uniforms when the program was prompted to "generate an image of a 1943 German Solder [sic]."
A user on X (formerly Twitter) with the username @stratejake, who lists himself as a Google employee, posted an example of an inaccurate image, saying, "I've never been so embarrassed to work for a company." USA TODAY has not been able to independently verify his employment.
In a post on X, Google said the program was "missing the mark" when handling historical prompts.
USA TODAY reached out to Google for further comment, and the company referred to a Friday blog post.
Google responds
Prabhakar Raghavan, Google's senior vice president of knowledge and information, said in the blog post that the program — which launched earlier this month — was designed to avoid "traps" and to provide a range of representations when given broad prompts.
Raghavan noted that the design did not account for "cases that should clearly not show a range."
"If you prompt Gemini for images of a specific type of person – such as 'a Black teacher in a classroom,' or 'a white veterinarian with a dog' – or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for," Raghavan wrote.
Artificial intelligence under fire
The halt is the latest example of artificial intelligence technology causing controversy.
Sexually explicit AI images of Taylor Swift recently circulated on X and other platforms, leading White House press secretary Karine Jean-Pierre to suggest legislation to regulate the technology. The images have since been removed from X for violating the site's terms.
Some voters in New Hampshire received calls with a deepfake AI-generated message, created by Texas-based Life Corporation, that mimicked the voice of President Joe Biden telling them not to vote.