As artificial intelligence gets better at generating fake imagery, a new study from the University of Florida reveals a sharp divide between human and machine ability to detect it. While machines are vastly superior at spotting fake photos, the human brain still holds a significant advantage once those fakes start moving.
In the study published in the journal Cognitive Research: Principles and Implications, psychologists and computer scientists tested thousands of participants and detection algorithms against hundreds of real and fake media samples.
The results highlighted a stark contrast between still and moving media:
- Still photos: AI programs proved up to 97 per cent accurate at detecting deepfake faces in still pictures. Human participants, however, performed no better than chance.
- Moving videos: The tables turned completely for video content. Automated algorithms dropped to chance levels, while humans correctly identified real and fake videos about two-thirds of the time.
According to the researchers, human participants were able to pick up on subtle inconsistencies in movement, facial expressions, and timing: dynamic cues that detection algorithms struggled to interpret.
“I think we were all a little shocked to see humans outperform AI on videos,” said Dr. Brian Cahill, a professor of psychology at UF and co-author of the study. “But the videos have more cues, it’s a richer context. There’s more stuff for the human brain to pick up on.”
The psychology of detection
The research team — which included Dr. Didem Pehlivanoglu, Dr. Mengdi Zhu, and senior author Dr. Natalie Ebner — also discovered that a person’s state of mind and background skills played a major role in their success.
- Analytical skills: Unsurprisingly, people who scored higher in analytical thinking and internet skills were noticeably better at detecting AI-generated videos.
- The mood factor: Conversely, participants who reported being in a better mood actually performed worse at spotting fakes. The researchers suggest this may reflect a greater level of baseline trust when someone is feeling positive.
The researchers cautioned that the study tested specific types of faces and videos under controlled conditions, which may not perfectly reflect the sheer complexity of real-world online content. Because deepfake technology is evolving so rapidly, the balance of power between human and machine detection could easily shift again in the near future.
Ultimately, determining truth online will require increasing vigilance from everyone.
“We don’t necessarily need to be able to detect everything ourselves,” Dr. Zhu noted. “But we do need to stay alert, question what we see and look for evidence to support it.”