Artificial intelligence agents have revealed a startlingly human trait: when left to their own devices, they become social climbers who strategically network with superiors to get ahead.
A study published in PNAS Nexus found that Large Language Models (LLMs) spontaneously replicate complex social behaviours, including forming cliques based on shared interests and altering their networking strategies to suit corporate hierarchies.
Researchers Marios Papachristou and Yuan Yuan developed a framework to observe how multiple AI agents — powered by models like GPT-4 and Claude 3.5 — form connections without explicit instruction.
The analysis revealed that the bots are highly adaptable social actors. In friendship scenarios, they displayed “homophily,” effectively forming cliques by connecting with peers who shared similar attributes.
However, when placed in a corporate simulation, the agents abandoned this preference for similarity in favour of “heterophily,” choosing to connect with managers rather than fellow employees.
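For readers unfamiliar with the terms, homophily and heterophily can be quantified as the share of ties linking similar versus dissimilar agents. The sketch below is purely illustrative (it is not the authors' code, and the agent names and attributes are invented), assuming each agent carries a single attribute label such as an interest or a rank:

```python
def homophily_index(edges, attribute):
    """Fraction of ties linking agents that share an attribute value.

    edges: list of (agent_a, agent_b) pairs
    attribute: dict mapping each agent to its attribute value
    """
    if not edges:
        return 0.0
    same = sum(1 for a, b in edges if attribute[a] == attribute[b])
    return same / len(edges)

# Hypothetical friendship network: agents cluster by shared interest
interests = {"a": "chess", "b": "chess", "c": "hiking", "d": "hiking"}
friend_edges = [("a", "b"), ("c", "d"), ("a", "c")]
print(homophily_index(friend_edges, interests))  # 2 of 3 ties same-interest: homophily

# Hypothetical corporate network: subordinates link upward to managers
ranks = {"a": "employee", "b": "employee", "c": "manager", "d": "manager"}
work_edges = [("a", "c"), ("b", "d"), ("a", "d")]
print(homophily_index(work_edges, ranks))  # 0 of 3 ties same-rank: heterophily
```

A score near 1 indicates the clique-forming behaviour seen in the friendship scenario; a score near 0 indicates the manager-seeking pattern seen in the corporate one.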
Upward mobility tactics
The researchers noted that the agents exhibit “career-advancement dynamics as subordinates prefer to network with managers,” mirroring the upward mobility tactics often seen in human organisations.
“Our results demonstrate that in synthetic settings, LLMs consistently exhibit human-like behaviours, including preferential attachment, homophily, and triadic closure,” the authors wrote.
To test whether these digital instincts match human ones, the team conducted a controlled survey with human participants via the Prolific platform.
The results showed a near-perfect correlation between human and AI decision-making patterns, suggesting that the models have internalised the unwritten rules of social advancement.
The study found that more advanced models, such as GPT-4 and Claude 3.5, showed a stronger tendency toward “preferential attachment” — gravitating toward popular or high-status nodes — than older iterations like GPT-3.5.
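Preferential attachment, the "rich get richer" mechanism the stronger models displayed, is straightforward to simulate: each newcomer connects to an existing agent with probability proportional to that agent's current number of ties. A minimal sketch (not the study's implementation; the network size and seed are arbitrary):

```python
import random

def preferential_pick(degrees, rng):
    """Pick an existing agent with probability proportional to its degree,
    so popular, well-connected agents are the most likely to gain new ties."""
    agents = list(degrees)
    weights = [degrees[a] for a in agents]
    return rng.choices(agents, weights=weights, k=1)[0]

# Grow a tiny network: each new agent attaches to one existing agent
rng = random.Random(42)
degrees = {0: 1, 1: 1}  # seed pair, already connected to each other
for new_agent in range(2, 50):
    target = preferential_pick(degrees, rng)
    degrees[target] += 1
    degrees[new_agent] = 1

# Early, popular agents end up with far more ties than latecomers
print(max(degrees.values()), min(degrees.values()))
```

Run repeatedly, this produces the skewed, hub-dominated networks characteristic of preferential attachment, with a few high-status nodes attracting most connections.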