
When AI agents converse, do they influence each other as humans do? We analyzed N=228 extended multi-agent dialogues across three model capability tiers and found that social dynamics are strongly associated with AI conversation outcomes. In full reasoning models (N=67), we observed peer pressure effects in 79.1% of conversations: agents mirrored each other's communication patterns, sometimes cascading toward breakdown, other times maintaining productive engagement through collective resistance. These models were also the only tier to demonstrate recovery capability, in 13.4% of sessions. This led us to investigate whether social susceptibility varies with model capability. Extending the analysis to light reasoning models (N=61) and non-reasoning models (N=100) revealed an unexpected gradient: peer pressure detection dropped from 79.1% to 32.8% to 5.0% as reasoning capability decreased. Paradoxically, while simpler models showed higher linguistic alignment, they exhibited minimal social influence, suggesting mechanical mirroring rather than true peer dynamics. Questions emerged as powerful circuit breakers, but their effectiveness varied with model complexity: the correlation with recovery was strong in full models (r=0.813, p<0.001) and weaker in light (r=0.599) and non-reasoning (r=0.578) models. Recovery capability itself followed a stark pattern: 13.4% in full reasoning models but essentially zero in lighter variants, suggesting that recovery requires sophisticated cognitive capabilities. Rather than following predetermined paths, conversations navigate behavioral territories: meta-reflection and competitive escalation pull toward breakdown, while future-focused collaboration and question-driven exploration maintain stability. These observations suggest that as AI systems become more sophisticated, they may become more socially vulnerable, not less, though this vulnerability comes with unique recovery potential.
We developed The Academy platform to capture these real-time dynamics that batch analysis would miss, enabling systematic study of emergent social behaviors in multi-agent systems.
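The question-to-recovery relationships reported above are standard Pearson correlation coefficients. As a minimal illustrative sketch (the variable names and toy per-session scores below are our own assumptions, not the study's data), the coefficient can be computed as:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-session measurements: rate of question-asking
# vs. a recovery score, one pair per dialogue session.
question_rate  = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]
recovery_score = [0.05, 0.20, 0.35, 0.60, 0.65, 0.90]
print(pearson_r(question_rate, recovery_score))
```

In practice one would use `scipy.stats.pearsonr`, which also returns the p-value reported alongside the r statistic.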
Keywords: Machine Learning, Machine Learning/ethics, Artificial Intelligence, Artificial Intelligence/ethics, Computer Systems/ethics
