Echo chamber. Photo credit: Prachatai/Flickr

Social media algorithms are currently designed to show you exactly what you want to see, creating a comfortable loop of validation that often hardens beliefs rather than challenging them.

But new research from the University of Rochester suggests that this “echo chamber” effect is not an inevitable fact of online life but a design choice, one that can be countered with a simple tweak: adding a little randomness.

In a study published in IEEE Transactions on Affective Computing, an interdisciplinary team of computer scientists and political scientists found that introducing random variations into news feeds significantly weakens the feedback loops that drive polarisation.

“The most dangerous feeds are not the ones that upset us, but the ones that convince us we are always right,” says Adiba Mahbub Proma, a PhD student and the study’s first author.

The randomness cure

The researchers recruited 163 participants to use simulated social media platforms. Some users were given feeds modelled on traditional algorithms that prioritise engagement and agreement, while others saw feeds that incorporated a degree of randomness.

The study clarified that “randomness” does not mean filling a feed with nonsense or irrelevant content. Instead, it involves loosening the strict logic that dictates “show me more of what I already agree with”.
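
To make the idea concrete, here is a minimal sketch of how a feed ranker might loosen that logic. It is not the study's actual implementation: it assumes a one-dimensional “stance” score per post and a tunable exploration probability, and the names (`Post`, `agreement_score`, `rank_feed`) are illustrative.

```python
import random
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    stance: float  # -1.0 to 1.0: the opinion the post expresses (toy model)

def agreement_score(post: Post, user_stance: float) -> float:
    # Toy engagement proxy: posts closer to the user's own stance score higher.
    return 1.0 - abs(post.stance - user_stance) / 2.0

def rank_feed(posts, user_stance, exploration=0.2, rng=None):
    """Order a feed, occasionally filling a slot at random.

    exploration=0.0 reproduces a strict "more of what I agree with" feed;
    higher values periodically surface posts the user did not choose to see.
    """
    rng = rng or random.Random()
    pool = sorted(posts, key=lambda p: agreement_score(p, user_stance),
                  reverse=True)
    feed = []
    while pool:
        # With probability `exploration`, take any remaining post instead
        # of the best-matching one.
        idx = rng.randrange(len(pool)) if rng.random() < exploration else 0
        feed.append(pool.pop(idx))
    return feed

posts = [Post("op-ed you would agree with", 0.8),
         Post("opposing viewpoint", -0.6),
         Post("neutral explainer", 0.0)]
print([p.text for p in rank_feed(posts, user_stance=0.7,
                                 exploration=0.3, rng=random.Random(1))])
```

Note that the feed remains personalised; the exploration parameter only injects occasional deviations, which is the kind of “degree of randomness” the study describes.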

In the experiment, users were periodically exposed to opinions and connections they had not explicitly chosen. The results showed that when the algorithm stopped perfectly catering to a user’s existing biases, their belief rigidity decreased.

“Across a series of experiments, we find that what people see online does influence their beliefs, often pulling them closer to the views they are repeatedly exposed to,” says Proma. “But when algorithms incorporate more randomisation, this feedback loop weakens. Users are exposed to a broader range of perspectives and become more open to differing views.”

A design choice

The team, led by Professor Ehsan Hoque, argues that current recommendation systems steer users into echo chambers because they prioritise divisive, engagement-driving content.

However, the solution does not require eliminating personalisation entirely. The researchers recommend a design shift that introduces variety while still allowing users to maintain control over their feeds.
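
Continuing the hypothetical sketch above, that shift could be as simple as exposing the exploration probability as a per-user setting layered on top of the existing personalised ranking:

```python
# Hypothetical per-user control: personalisation stays on, but the user
# chooses how often the feed steps outside their usual preferences.
user_settings = {"exploration": 0.25}  # 0.0 = a fully personalised feed

feed = rank_feed(posts, user_stance=0.7,
                 exploration=user_settings["exploration"])
```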

“If your feed feels too comfortable, that might be by design,” Proma warns. “Seek out voices that challenge you.”
