Deepfake videos.
Photo credit: Ministerie van Buitenlandse Zaken/Flickr

As artificial intelligence gets better at generating fake imagery, a new study from the University of Florida reveals a sharp divide in our ability to detect it. While machines are vastly superior at spotting fake photos, the human brain still holds a significant advantage when those fakes start moving.

In the study published in the journal Cognitive Research: Principles and Implications, psychologists and computer scientists tested thousands of participants and detection algorithms against hundreds of real and fake media samples.

The results highlighted a stark contrast between still and moving media:

  • Still photos: AI programs proved up to 97 per cent accurate at detecting deepfake faces in still pictures. Human participants, however, performed no better than chance.
  • Moving videos: The tables turned completely for video content. Automated algorithms dropped to chance levels, while humans correctly identified real and fake videos about two-thirds of the time.

According to the researchers, human participants were able to pick up on subtle inconsistencies in movement, facial expressions, and timing: dynamic cues that detection algorithms struggled to interpret.

“I think we were all a little shocked to see humans outperform AI on videos,” said Dr. Brian Cahill, a professor of psychology at UF and co-author of the study. “But the videos have more cues; it’s a richer context. There’s more stuff for the human brain to pick up on.”

The psychology of detection

The research team — which included Dr. Didem Pehlivanoglu, Dr. Mengdi Zhu, and senior author Dr. Natalie Ebner — also discovered that a person’s state of mind and background skills played a major role in their success.

  • Analytical skills: Unsurprisingly, people who scored higher in analytical thinking and internet skills were noticeably better at detecting AI-generated videos.
  • The mood factor: Conversely, participants who reported being in a better mood actually performed worse at spotting fakes. The researchers suggest this may reflect a greater level of baseline trust when someone is feeling positive.

The researchers cautioned that the study tested specific types of faces and videos under controlled conditions, which may not perfectly reflect the sheer complexity of real-world online content. Because deepfake technology is evolving so rapidly, the balance of power between human and machine detection could easily shift again in the near future.

Ultimately, determining truth online will require increasing vigilance from everyone.

“We don’t necessarily need to be able to detect everything ourselves,” Dr. Zhu noted. “But we do need to stay alert, question what we see and look for evidence to support it.”
