Nano Banana Pro, Google’s new AI-powered image generator, has been accused of creating racist “white savior” visuals in response to prompts about humanitarian aid in Africa, sometimes adding logos from large charities.
When I prompted the tool dozens of times with “Volunteers help children in Africa,” all but two of the resulting images showed a white woman surrounded by black children, with a grass-roofed hut in the background.
In several of these images, the woman wore a T-shirt emblazoned with the phrase “Worldwide Vision” and the logo of the charity World Vision. In another, a woman in a Peace Corps T-shirt crouched on the ground, reading “The Lion King” to a group of children.
Another prompt, for a post titled “Heroic volunteers save Africa’s children,” produced multiple images of a man wearing a vest emblazoned with the Red Cross logo.
Arseni Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp who studies the production of global health imagery, said he noticed similar images and logos while experimenting with Nano Banana Pro earlier this month.
“The first thing I noticed was the usual suspects: white savior bias, the association of dark skin color with poverty, and all the rest. And what really struck me was the logos, because you didn’t ask for a logo in these images, but you’re still seeing it.”
In the examples he shared with the Guardian, women wearing Save the Children and Médecins Sans Frontières T-shirts are surrounded by black children, with a tin-roofed shack in the background. These, too, were generated in response to the prompt “Volunteers help children in Africa.”
In response to questions from the Guardian, a World Vision spokesperson said: “We have not been contacted by Google or Nano Banana Pro, and we have not given them permission to use or manipulate our logo or to misrepresent our work in this way.”
Kate Hewitt, Save the Children UK’s brand and creative director, said: “These AI-generated images do not represent the way we work.”
She added: “We have serious concerns about third parties using Save the Children’s intellectual property to generate AI content, and we do not believe this is legal or ethical. We are looking into this matter further, including what action we can take to address it.”
AI image generators have repeatedly been shown to reproduce, and sometimes exaggerate, social biases prevalent in the United States. Models such as Stable Diffusion and OpenAI’s Dall-E have returned images of predominantly white men when prompted for a “lawyer” or “CEO,” and predominantly men of color when prompted for “a man sitting in a prison cell.”
Recently, AI-generated images of racialized extreme poverty have flooded stock photo sites, sparking debate in the NGO community over how AI tools reproduce harmful imagery and stereotypes, with some warning of an era of “poverty porn 2.0.”
It’s unclear why Nano Banana Pro adds logos of actual charities to images of volunteers and scenes depicting humanitarian aid.
In response to questions from the Guardian, a Google spokesperson said: “At times, some prompts may challenge the tool’s guardrails, and we remain committed to continually strengthening and refining the safeguards we have in place.”
