Amnesty International chose to use an AI-generated image to depict protests and police brutality in Colombia to protect protesters. Photo credit: Amnesty International

Tempted by the promise of faster, cheaper campaign materials, many major charities have started using artificial intelligence to generate imagery. But a new study reveals this high-tech shortcut to empathy is actively backfiring, distracting donors and eroding the fundamental bond of public trust.

According to a new report from the University of East Anglia (UEA), when humanitarian organisations use AI-generated images, the actual cause effectively disappears from the conversation.

The report, titled Artificial Authenticity, analysed 171 AI-generated images and more than 400 public comments surrounding campaigns from 17 major organisations, including Amnesty International, the World Health Organization (WHO), and WWF.

The distraction effect

The researchers found that introducing AI fundamentally reshapes how the public engages with a charity. Instead of focusing on the crisis at hand, audiences become obsessed with the technology itself.

Of the comments analysed by the UEA team:

  • 141 focused heavily on AI ethics and authenticity concerns.
  • 122 critiqued the technical execution and visual quality of the fake images.
  • Only 80 (less than 20 per cent) actually engaged with the humanitarian issue being promoted.

“Charities exist because people care about other people. The moment when audiences start questioning whether what they are seeing is real, the emotional connection that drives support is put at risk,” explained co-author David Girling, from UEA’s School of Global Development. “The debate about the ethics of AI is increasingly polarised. AI is not inherently wrong, but if it begins to overshadow the human story at the heart of charitable work, organisations could lose far more in trust than they gain in efficiency.”

Transparency is not a shield

The study noted that nearly 70 per cent of the AI images analysed were designed to appear photorealistic, with poverty being the dominant theme.

While 85 per cent of these images were appropriately captioned as AI-generated, this transparency did not protect the organisations from public backlash. Furthermore, the study found significant anger over “message-medium misalignment”.

For example, environmental organisations like WWF Denmark faced heavy criticism for using energy-intensive AI tools to promote sustainability — a move climate-conscious commenters labelled as “ecocidal”.

Privacy vs. authenticity

For some charities, using AI-generated stand-in visuals is actually viewed as a safeguarding measure. Generating an image of a fictional starving child, for instance, removes the need to point a camera at a real, vulnerable person, sparing them further trauma and exploitation.

However, the UEA study shows that donors frequently reject these “fake” images, prioritising their own psychological need for an “authentic witness” over a beneficiary’s right to privacy.

Ultimately, the researchers advise charities to tread carefully. “The future of charity storytelling will not hinge on technological capability alone,” said co-author Deborah Adesina. “It will depend on whether organisations can maintain legitimacy, transparency and moral coherence in an environment where audiences are increasingly media literate and increasingly sceptical.”

If communications teams do opt to use generative AI, the researchers recommend co-creating the imagery directly with local communities to ensure it is accurate and culturally appropriate.
