During Movember this year, the charity launched a clever campaign with Google Gemini: upload a photo, get an AI mustache, post it, and Google donates £11 to Movember. A good cause, a seemingly simple activation.
Nowhere on the campaign page was there any mention of what would happen to your photos. According to Google's privacy policy for the free Gemini app, uploaded photos are stored for three years, in some cases reviewed by humans, and used to train AI models. When I asked Movember about this, their answer was essentially that the privacy policy is available to anyone who wants to read it.
This campaign raises money for men’s health. It also provides facial data to Google to train its AI. Only the first part is mentioned.
Unfortunately, Movember is far from alone. This is a sector-wide issue, and we are not talking about it openly enough. We are at a crossroads with AI imagery, and the philanthropic sector has an opportunity to lead the way. We could be the sector that gets this right.
Transparency issues
I've been speaking for years in the philanthropic field about dignity, consent, and not exploiting people's images. The burden of informed consent should not rest on participants hunting down privacy policies buried in app settings.
We can support important causes and be transparent about how people's data is used. These are not mutually exclusive. A line on the campaign page. A checkbox. Tell people their photos serve both purposes, and give them the option to opt in.
This matters because charities are held to a different standard. We exist because we care about people and justice. Failing to provide basic transparency undermines the trust that makes our work possible.
Serious issues with AI-generated images
Transparency is just the surface issue. What most charities don't understand about AI-generated imagery goes deeper.
Not all charity uses of AI-generated images are exploitative. Many organizations use them for abstract concepts, design elements, or illustrations that don't depict real people. That's a different conversation.
Worryingly, a growing number of charities are using AI to generate provocative, stereotypical images of people in distress. A homeless person on a street corner. Children in poverty. People struggling with their mental health. A common justification is that this "protects people's identities" or avoids the complexities of working with real people. That excuse isn't good enough.
These tools are trained on large datasets of existing images. When AI generates images of “homeless people” and “children in poverty,” it draws on decades of stereotypical and often racist images. AI is not creating a neutral image. It regurgitates and reinforces the worst stereotypes that we have spent years trying to break free of.
We don't actually know what these datasets contain. How many of the images used to train these tools were scraped without the subjects' consent? How many photographers, illustrators, and creators had their work taken without permission or payment?
No matter how careful you are with your prompts, you'll be working with a biased dataset. And you, the human writing those prompts, bring your own biases too. That combination can produce images that reflect existing prejudice, even when you think you're being thoughtful.
There are also environmental costs. Generating AI images consumes enormous amounts of electricity and carries a significant carbon footprint; for a sector that campaigns on climate justice, that should give us pause. And what happens to the livelihoods of the photographers, illustrators, and other creators who have built their careers on telling these stories with integrity? They are affected too.
The crisis of trust and the way forward
If charities start using AI-generated images without disclosing them, the practice will become the norm. This undermines the already fragile trust between charities and the public.
We operate in an environment where many people do not fully understand how charities work and already view the sector with skepticism. Adding undisclosed AI-generated content to the mix only widens that gap.
But charities have a real opportunity to lead on the ethical use of AI. We could be the sector that demonstrates how to use this technology while staying true to our values.
If you’re using AI-generated images in your charity content without labeling them, ask yourself why. If you can’t be transparent about it, that’s your answer.
This is also an opportunity for charities to embrace authenticity. The organizations that grow strongest are not the ones with the most sophisticated, algorithmically perfect content; they're the ones making real connections with real people and real stories.
Charities will use AI. That's the reality. If you use it to collect data, be transparent about what happens to that data, as the Movember example shows. If you use AI-generated images, label them. Ask yourself why you are using them, and whether it's worth it. Or would being authentic be more effective? The charity sector must lead with values. Now is our chance to prove it.
I am conducting research on AI ethics in the charity sector. I believe this requires a sector-wide conversation and clear guidelines. I am running a survey to understand how charities are currently thinking about AI-generated imagery, and I'd like to hear from you.
Emma Bracegirdle is the founder of Saltways, a UK-based video production company specializing in ethical film production for charities and non-profit organizations.
