Celebrity faces, real and synthetic.
Top row are real images, bottom row are synthetic. Photo credit: Swansea University/Wikipedia

Artificial intelligence can now generate images of real people that are virtually impossible to tell apart from genuine photographs, according to research highlighting a new level of "deepfake realism". Participants could not reliably distinguish the synthetic images from authentic photos, even when they were familiar with the person's appearance.

Using AI models ChatGPT and DALL·E, researchers from Swansea University, the University of Lincoln and Ariel University in Israel created highly realistic images of both fictional and famous faces, including celebrities. The findings were published in the journal Cognitive Research: Principles and Implications.

Across four separate experiments, the researchers found that neither comparison photos nor participants' prior familiarity with the faces helped much in distinguishing real images from fake ones.

“Studies have shown that face images of fictional people generated using AI are indistinguishable from real photographs. But for this research we went further by generating synthetic images of real people,” said Professor Jeremy Tree from the School of Psychology at Swansea University.

Plausible fake faces

In one experiment, participants from the US, Canada, the UK, Australia and New Zealand were shown a series of facial images, both real and artificially generated, and asked to identify which was which. That participants mistook the AI-generated novel faces for real photos showed just how plausible those faces were.

In another experiment, participants were asked to tell genuine pictures of Hollywood stars such as Paul Rudd and Olivia Wilde from computer-generated versions. The results showed how difficult it can be to spot the authentic image.

The researchers say AI’s ability to produce synthetic images of real people opens up a number of avenues for use and abuse. For instance, creators might generate images of a celebrity endorsing a certain product or political stance, which could influence public opinion of both the identity and the brand or organisation they are portrayed as supporting.

“This study shows that AI can create synthetic images of both new and known faces that most people can’t tell apart from real photos. Familiarity with a face or having reference images didn’t help much in spotting the fakes; that is why we urgently need to find new ways to detect them,” said Professor Tree.

The findings raise urgent concerns about misinformation and trust in visual media as well as the need for reliable detection methods.

