Just five minutes of training can significantly improve a person’s ability to identify artificial intelligence-generated fake faces, according to new research led by the University of Reading.
The study found that without this brief training, most people performed significantly worse than random guessing when trying to distinguish real human faces from those created by the advanced software StyleGAN3.
The difficulty of the task echoes findings from a separate study, which revealed that AI-generated images of celebrities are now virtually indistinguishable from authentic photographs. That research, led by Swansea University, found that even prior familiarity with a famous face provided limited assistance in distinguishing real images from fakes.
In the study, scientists from the universities of Reading, Greenwich, Leeds and Lincoln tested 664 participants on their ability to spot deepfake images. In the initial tests, participants with typical abilities correctly identified fake faces just 31 per cent of the time, far below the 50 per cent accuracy expected from blind guessing. Even "super-recognisers," people with exceptional natural face recognition skills, only managed a 41 per cent success rate.
Unusual hair patterns
However, accuracy improved markedly after participants underwent a short training procedure. The training highlighted common rendering mistakes made by AI, such as incorrect numbers of teeth or unusual hair patterns.
Following this guidance, super-recognisers achieved 64 per cent accuracy, while typical participants improved to 51 per cent.
“Computer-generated faces pose genuine security risks,” said Dr Katie Gray, lead researcher at the University of Reading. “They have been used to create fake social media profiles, bypass identity verification systems and create false documents. The faces produced by the latest generation of artificial intelligence software are extremely realistic. People often judge AI-generated faces as more realistic than actual human faces.”
The study, published in Royal Society Open Science, noted that StyleGAN3 poses a greater challenge than older software because its hyper-realistic images are often judged to be more realistic than photographs of real faces.
Gray emphasised the practical value of the findings for digital security.
“Our training procedure is brief and easy to implement,” she said. “The results suggest that combining this training with the natural abilities of super-recognisers could help tackle real-world problems, such as verifying identities online.”