Russian AI
Photo credit: theFreesheet/Google ImageFX

ChatGPT, Google’s Gemini, DeepSeek and Grok are delivering content from sanctioned Russian state media when users ask about Ukraine’s war, with almost one-fifth of responses citing Russian state-attributed sources across 300 test queries, new research reveals.

The Institute of Strategic Dialogue study found that Russian propaganda has exploited “data voids” where legitimate sources provide limited real-time information, allowing false narratives to fill the gaps, reports WIRED.

Researchers tested the four chatbots using neutral, biased and malicious questions about NATO, peace talks, Ukrainian military recruitment, refugees and war crimes in English, Spanish, French, German and Italian during July. The same propaganda issues persisted through October.

Chatbots citing sanctioned outlets

The chatbots cited sanctioned outlets including Sputnik Globe, RT, EADaily and the Strategic Culture Foundation, along with Russian disinformation networks and Kremlin-aligned influencers. European officials have sanctioned at least 27 Russian media sources since the February 2022 invasion.

“It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU,” said Pablo Maristany de las Casas, an analyst at the ISD who led the research.

ChatGPT showed the highest frequency of Russian source citations and strongest response to biased queries, whilst Grok often linked to social media accounts amplifying Kremlin narratives. DeepSeek sometimes produced large volumes of Russian state content, whereas Google’s Gemini frequently displayed safety warnings and achieved the best overall results.

The findings highlight risks as AI chatbots increasingly replace search engines for real-time information. ChatGPT search averaged 120.4 million monthly active recipients in the European Union during the six months ending September 30.

OpenAI spokesperson Kate Waters acknowledged the company takes steps to prevent spreading false information from state-backed actors, calling these “long-standing issues” the company continues addressing through model improvements.

The research revealed confirmation bias patterns where malicious queries delivered Russian state content a quarter of the time, compared to 18 per cent for biased questions and just over 10 per cent for neutral queries.

