Scientists have intercepted and reshaped live social media feeds using ad-blocker-style technology, demonstrating that a platform’s algorithm can be counteracted from the outside, without the platform’s permission, to reduce political hostility.
A multidisciplinary team led by Stanford University developed a browser extension that functions like an ad blocker, intercepting and reshaping the web feed of Elon Musk’s social media platform X in real time. Using classifiers built on large language models to evaluate each post’s content, and reordering the feed accordingly, the researchers demonstrated that users can take control of their own algorithms.
The tool was deployed in a field experiment involving 1,256 participants during the 2024 US election. It scanned posts for expressions of anti-democratic attitudes and partisan animosity, such as advocating political violence or calling for supporters of the opposing party to be jailed.
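The paper’s exact prompts are not reproduced here, but the screening step can be illustrated with a short sketch. Everything in it, the `Post` shape, the prompt wording, and the `callModel()` helper, is a hypothetical stand-in for whatever LLM API the extension actually calls:

```typescript
// Hypothetical sketch of the LLM-based screening step. The prompt text,
// the Post shape, and callModel() are illustrative assumptions, not the
// study's actual implementation.
interface Post {
  id: string;
  text: string;
}

const SCREEN_PROMPT =
  "Does this post express anti-democratic attitudes or partisan " +
  "animosity, for example advocating political violence or jailing " +
  "supporters of the opposing party? Answer YES or NO.\n\nPost: ";

// Stand-in for a real LLM API call; a production version would send the
// prompt to a hosted model and return its text completion.
async function callModel(prompt: string): Promise<string> {
  return "NO"; // placeholder so the sketch type-checks and runs
}

// Flag a post as hostile if the model answers YES.
async function isHostile(post: Post): Promise<boolean> {
  const answer = await callModel(SCREEN_PROMPT + post.text);
  return answer.trim().toUpperCase().startsWith("YES");
}
```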
Crucially, the tool did not delete content. Instead, it reordered the feed, moving incendiary posts lower in the user’s content stream.
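In effect, the reranking is a stable partition: flagged posts keep their relative order but sink below everything else. A minimal sketch, reusing the hypothetical `Post` and `isHostile()` from above:

```typescript
// Downranking without deletion: flagged posts sink below the rest while
// relative order is preserved on both sides (a stable partition).
async function downrankFeed(posts: Post[]): Promise<Post[]> {
  const flags = await Promise.all(posts.map(isHostile));
  const calm = posts.filter((_, i) => !flags[i]);
  const hostile = posts.filter((_, i) => flags[i]);
  return [...calm, ...hostile]; // nothing removed, only moved lower
}
```

Keeping the ordering stable means the platform’s own ranking still decides the order within each group; the tool only changes how far down the flagged posts appear.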
“Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them,” said Michael Bernstein, a professor of computer science in Stanford’s School of Engineering and the study’s senior author. “We have demonstrated an approach that lets researchers and end users have that power.”
Hostile content
The study, published in Science, found that participants whose hostile content was downranked showed significantly warmer feelings toward the opposing political party. The effect was bipartisan, holding for participants who identified as liberal and for those who identified as conservative.
Participants who had hostile content downranked saw their attitudes toward the opposing party improve by two points on a 100-point scale. The researchers note this is comparable in size to the estimated change in partisan attitudes across the general US population over a period of three years.
“When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party,” said Tiziano Piccardi, the study’s first author who is now an assistant professor of computer science at Johns Hopkins University. “When they were exposed to more, they felt colder.”
The technology relies on a middleware approach that sidesteps the challenges of studying proprietary algorithms. Because platform operators alone control how their algorithms behave, testing theories about polarisation has historically proven exceptionally difficult.
By creating a tool that users install voluntarily, the team bypassed the need for platform cooperation. The web-based tool reorders content within seconds, heading off the “emotional hijacking” that can occur when users encounter polarising content.
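A rough sketch of how such an ad-blocker-style content script might watch and reorder the page follows. The CSS selector, the assumption that posts are sibling nodes in one container, and the `looksHostile()` stub are illustrative guesses, not the study’s released code:

```typescript
// Sketch of the middleware layer: a browser-extension content script
// that watches the timeline and sinks posts a classifier flags.
const POST_SELECTOR = 'article[data-testid="tweet"]';

// Placeholder classifier; the real tool uses LLM-based classifiers.
function looksHostile(text: string): boolean {
  return false; // stub so the sketch runs without an LLM backend
}

function pushHostilePostsDown(): void {
  const posts = Array.from(document.querySelectorAll(POST_SELECTOR));
  for (const post of posts) {
    if (looksHostile(post.textContent ?? "")) {
      // appendChild() moves an existing node, so a flagged post drops
      // to the bottom of its container instead of being deleted.
      post.parentElement?.appendChild(post);
    }
  }
}

// Re-run whenever the platform injects new posts, pausing observation
// so our own moves do not retrigger the callback.
const observer = new MutationObserver(() => {
  observer.disconnect();
  pushHostilePostsDown();
  observer.observe(document.body, { childList: true, subtree: true });
});
observer.observe(document.body, { childList: true, subtree: true });
```

Because the reordering happens client-side, after the platform has already chosen what to show, nothing is hidden from the user and no cooperation from the platform is required.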
The findings suggest that algorithmically mediated exposure to political hostility shapes both affective polarisation and moment-to-moment emotional responses. Participants reported decreased feelings of anger and sadness when the tool was active.
The team has released the tool’s code so that other researchers and developers can create ranking systems independent of a social media platform’s algorithm.