
ChatGPT, Google’s Gemini, DeepSeek and Grok are delivering content from sanctioned Russian state media when users ask about Ukraine’s war, with almost one-fifth of responses citing Russian state-attributed sources across 300 test queries, new research reveals.

The Institute for Strategic Dialogue study found that Russian propaganda has exploited “data voids” — where legitimate sources provide limited real-time information — allowing false narratives to fill the gaps, reports WIRED.

Researchers tested the four chatbots in July using neutral, biased and malicious questions about NATO, peace talks, Ukrainian military recruitment, refugees and war crimes, posed in English, Spanish, French, German and Italian. The same propaganda issues persisted through October.

Chatbots citing sanctioned outlets

The chatbots cited sanctioned outlets including Sputnik Globe, RT, EADaily and the Strategic Culture Foundation, along with Russian disinformation networks and Kremlin-aligned influencers. European officials have sanctioned at least 27 Russian media sources since the February 2022 invasion.

“It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU,” said Pablo Maristany de las Casas, an analyst at the ISD who led the research.

ChatGPT showed the highest frequency of Russian source citations and strongest response to biased queries, whilst Grok often linked to social media accounts amplifying Kremlin narratives. DeepSeek sometimes produced large volumes of Russian state content, whereas Google’s Gemini frequently displayed safety warnings and achieved the best overall results.

The findings highlight risks as AI chatbots increasingly replace search engines for real-time information. ChatGPT search had an average of 120.4 million monthly active recipients in the European Union during the six months ending September 30.

OpenAI spokesperson Kate Waters said the company takes steps to prevent the spread of false information from state-backed actors, calling these “long-standing issues” that the company continues to address through model improvements.

The research also revealed a confirmation-bias pattern: malicious queries surfaced Russian state content a quarter of the time, compared with 18 per cent for biased questions and just over 10 per cent for neutral queries.

