Pictured left to right are UCC School of Applied Psychology researchers Dr Conor Linehan; John Twomey, lead researcher of Deepfakes/Real Harms; and Dr Gillian Murphy. Photo credit: University College Cork

A new 10-minute online course has been shown to reduce people’s willingness to create and share non-consensual deepfake images by teaching them empathy for the victims.

Researchers at University College Cork (UCC) have launched the world’s first evidence-based tool designed to curb the spread of AI-generated explicit imagery.

The intervention, called Deepfakes/Real Harms, is being released amidst the ongoing controversy surrounding the Grok AI “undressing” scandal, as pressure mounts on regulators to confront the rapid spread of synthetic abuse.

While legislation is crucial, the UCC team argues that educating users so they never engage with the technology in the first place must be a central part of the solution.

Don’t blame the bot

Lead researcher John Twomey from the UCC School of Applied Psychology emphasised that while AI tools like Grok facilitate the abuse, the responsibility lies with the human user.

“There is a tendency to anthropomorphise AI technology – blaming Grok for creating explicit images and even running headlines claiming Grok ‘apologised’ afterwards,” Twomey said.

“But human users are the ones deciding to harass and defame people in this manner. Our findings suggest that educating individuals about the harms of AI identity manipulation can help to stop this problem at source.”

Busting the myths

The researchers discovered that people who watch, share, or create non-consensual intimate imagery – often mistakenly called “deepfake pornography” – are usually driven by specific false beliefs.

Common myths include the idea that the images are only harmful if viewers believe they are real, or that public figures are “legitimate targets” for this kind of abuse.

To combat this, the team developed a free, 10-minute online intervention that encourages users to reflect on the real-world impact of these images.

Tested on more than 2,000 participants worldwide, the tool was found to significantly reduce belief in these myths. Crucially, it also reduced users’ intentions to engage with harmful deepfake technology – an effect that persisted weeks later.

Changing the language

The project also calls for a change in how society talks about the issue. Dr Gillian Murphy, the project’s Principal Investigator, argues that the term “deepfake pornography” is dangerous.

“The word ‘pornography’ generally refers to an industry where participation is consensual. In these cases, there is no consent at all,” Murphy explained.

“What we are seeing is the creation and circulation of non-consensual synthetic intimate imagery, and that distinction matters because it captures the real and lasting harm experienced by victims of all ages around the world.”

Pause button for the internet

Feedback from users suggests the tool works by offering a non-judgmental space for reflection rather than simply lecturing them.

“It didn’t come across as judgmental or preachy – it was more like a pause button,” one participant reported. “Instead of just pointing fingers, it gave you a chance to reflect and maybe even empathise a little, which can make the message stick longer than just being told, ‘Don’t do this’”.
