ChatGPT, Google’s Gemini, DeepSeek and Grok are delivering content from sanctioned Russian state media when users ask about the war in Ukraine, with almost one-fifth of responses across 300 test queries citing Russian state-attributed sources, new research reveals.
The Institute for Strategic Dialogue (ISD) study found that Russian propaganda has exploited “data voids” where legitimate sources provide limited real-time information, allowing false narratives to fill the gaps, reports WIRED.
Researchers tested the four chatbots in July with neutral, biased and malicious questions about NATO, peace talks, Ukrainian military recruitment, refugees and war crimes, posed in English, Spanish, French, German and Italian. The same issues persisted through October.
Chatbots citing sanctioned outlets
The chatbots cited sanctioned outlets including Sputnik Globe, RT, EADaily and the Strategic Culture Foundation, along with Russian disinformation networks and Kremlin-aligned influencers. European officials have sanctioned at least 27 Russian media sources since the February 2022 invasion.
“It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU,” said Pablo Maristany de las Casas, an analyst at the ISD who led the research.
ChatGPT cited Russian sources most frequently and was the most susceptible to biased queries, whilst Grok often linked to social media accounts amplifying Kremlin narratives. DeepSeek sometimes produced large volumes of Russian state content, whereas Google’s Gemini frequently displayed safety warnings and performed best overall.
The findings highlight risks as AI chatbots increasingly replace search engines for real-time information. ChatGPT search averaged 120.4 million monthly active recipients in the European Union during the six months ending September 30.
OpenAI spokesperson Kate Waters said the company takes steps to prevent the spread of false information from state-backed actors, describing these as “long-standing issues” the company continues to address through model improvements.
The research revealed a confirmation-bias pattern: malicious queries returned Russian state content about a quarter of the time, compared with 18 per cent for biased questions and just over 10 per cent for neutral queries.