MrBeast, one of the world’s biggest social media influencers. Photo credit: YouTube

Algorithms used to identify influential people in social networks, including those that determine which content creators brands should partner with for marketing campaigns, disseminate information inequitably and potentially exacerbate existing social inequalities, according to research published in PNAS Nexus.

Vedran Sekara and colleagues examined how influence maximisation algorithms, widely used in everything from public health campaigns to social media marketing, select individuals to spread messages. The researchers found these algorithms create information gaps where certain groups consistently miss important information while well-connected individuals receive it repeatedly.

The study has implications for how brands select influencers for marketing campaigns and how public health organisations spread vital information. When companies use algorithms to identify which social media personalities to partner with for product promotions, those same algorithms may systematically exclude large portions of potential customers from seeing the message.

The researchers used the independent cascade model on synthetic and diverse real-world social networks, including connections between households in multiple villages, political bloggers, Facebook friendships and scientific collaborations. Across 10 real-world social networks, up to 80 per cent of individuals received information less frequently than they would have if it had been distributed randomly.
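The independent cascade model the researchers simulated works by activating a set of seed nodes and then giving each newly activated node one chance to activate each inactive neighbour with some probability. A minimal sketch, assuming an adjacency-list graph and an illustrative per-edge probability `p` (the paper's actual parameters are not given here):

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=random):
    """Simulate one run of the independent cascade model.

    graph: dict mapping each node to a list of neighbours
    seeds: the initially activated nodes
    p: per-edge activation probability (illustrative value)
    Returns the set of nodes the cascade reaches.
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbour in graph.get(node, []):
                # Each newly active node gets exactly one chance to
                # activate each still-inactive neighbour.
                if neighbour not in active and rng.random() < p:
                    active.add(neighbour)
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return active
```

Running many such simulations from a given seed set estimates each individual's probability of ever receiving the message, which is how the kind of inequity the study measures becomes visible.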

Low-connected individuals lose out

The study revealed that four commonly used influence maximisation methods consistently disadvantage low-connected and peripheral individuals. A predictive model based on network structure could determine with 97.4 per cent accuracy whether someone would be left out, demonstrating that the algorithms systematically favour certain network positions over others.
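The paper's finding that exclusion is predictable from network position can be illustrated with a toy classifier. The study does not disclose its exact model or features; the sketch below assumes node degree as the sole (hypothetical) feature and fits a tiny logistic classifier by gradient descent:

```python
import math

def degrees(graph):
    # Node degree: the simplest structural feature of network position.
    return {node: len(nbrs) for node, nbrs in graph.items()}

def train_logistic(xs, ys, lr=0.1, epochs=500):
    """Fit P(left out) = sigmoid(w*x + b) by per-sample gradient descent.

    xs: feature values (here, degrees); ys: 1 if left out, else 0.
    This is an illustrative stand-in, not the paper's actual model.
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def predict_left_out(w, b, x):
    return 1 / (1 + math.exp(-(w * x + b))) >= 0.5
```

On data where low-degree nodes are the ones left out, the fitted weight comes out negative, mirroring the study's point that peripheral positions are systematically disfavoured.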

The researchers wrote that the issue lies with the problem statement and choice of objective function, with algorithmic bias created by focusing solely on optimising reach without considering information equity. They noted that not receiving information has real-world consequences, citing examples where individuals were left untreated in mass drug administration campaigns not due to lack of medicine but because they never received information about the campaign.

To address the problem, the researchers devised a multiobjective algorithm designed to maximise both spread and fairness. The method for choosing which influencers to target results in six to ten per cent fewer vulnerable individuals, with a negligible effect on overall reach.
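One way to picture a multiobjective approach is a greedy seed selection that scores each candidate on expected reach minus a penalty for the fraction of "vulnerable" nodes left with a low chance of receiving the message. This is a minimal sketch of that idea, not the paper's published algorithm; the weight `lam`, the vulnerability `threshold` and the probability `p` are all illustrative assumptions:

```python
import random

def simulate_cascade(graph, seeds, p, rng):
    # One independent-cascade run: returns the set of nodes reached.
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for nb in graph[node]:
                if nb not in active and rng.random() < p:
                    active.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return active

def receive_probs(graph, seeds, p, trials, rng):
    # Monte Carlo estimate of each node's probability of being reached.
    counts = dict.fromkeys(graph, 0)
    for _ in range(trials):
        for node in simulate_cascade(graph, seeds, p, rng):
            counts[node] += 1
    return {n: c / trials for n, c in counts.items()}

def greedy_fair_seeds(graph, k, p=0.2, lam=1.0, threshold=0.1,
                      trials=200, seed=0):
    """Greedily pick k seeds, trading reach against equity.

    Score = mean receive probability
            - lam * fraction of nodes below `threshold`.
    lam, threshold and p are illustrative choices, not values
    from the paper.
    """
    rng = random.Random(seed)
    chosen = []
    for _ in range(k):
        best, best_score = None, float("-inf")
        for cand in graph:
            if cand in chosen:
                continue
            probs = receive_probs(graph, chosen + [cand], p, trials, rng)
            reach = sum(probs.values()) / len(probs)
            vulnerable = sum(q < threshold for q in probs.values()) / len(probs)
            score = reach - lam * vulnerable
            if score > best_score:
                best, best_score = cand, score
        chosen.append(best)
    return chosen
```

Setting `lam` to zero recovers pure reach maximisation; raising it shifts seed choices toward covering poorly connected regions of the network, which is the trade-off the study quantifies.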

For the network of political blogs, a one per cent decrease in message reach resulted in a 12 per cent decrease in vulnerable individuals. For email communication, the decrease was 18 per cent, rising to 42 per cent for collaboration networks and 65 per cent for online friendships on Facebook. Accepting a five per cent reduction in reach yielded up to 71 per cent fewer vulnerable individuals for online friendships.

The researchers concluded that the pervasive usage of influence maximisation algorithms in information diffusion and online social networks can create large fractures in the social fabric of societies. They emphasised the need to understand if such algorithms are equitable, quantify the level of inequality and propose alternatives that balance potential reach and equity.

The findings suggest that as brands increasingly rely on algorithms to select influencers for marketing campaigns, they may inadvertently create echo chambers where messages only reach those already well-connected, potentially limiting market reach and reinforcing existing social divisions.
