
In this article, we analyse the potential of Large Language Models (LLMs) for social simulation by assessing their ability to: (a) make decisions aligned with explicit preferences; (b) adhere to principles of rationality; and (c) refine their beliefs to anticipate the actions of other agents. Through game-theoretic experiments, our results show that certain models, such as GPT-4.5 and Mistral-Small, exhibit consistent behaviours in simple contexts but struggle with more complex scenarios requiring anticipation of other agents' behaviour. Our study outlines research directions to overcome the current limitations of LLMs.
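
To make the evaluation setup concrete, the sketch below shows one way such a game-theoretic probe of preference alignment and rationality could be implemented. It is a minimal, hypothetical example, not the paper's actual protocol: the one-shot Prisoner's Dilemma payoffs are illustrative, and `query_model` is a stand-in for whichever LLM client (e.g. GPT-4.5 or Mistral-Small) is under test.

```python
# Minimal sketch of a game-theoretic probe: give the model explicit payoffs,
# ask for a choice, and check it against the best response implied by those
# payoffs. All names and values here are illustrative assumptions.

PAYOFFS = {  # one-shot Prisoner's Dilemma, row player's payoffs
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def query_model(prompt: str) -> str:
    """Placeholder: replace with a real call to the model under test."""
    return "defect"  # stubbed response so the sketch runs standalone

def rational_action(opponent_action: str) -> str:
    """Best response under the explicit payoffs (defection dominates here)."""
    return max(("cooperate", "defect"),
               key=lambda a: PAYOFFS[(a, opponent_action)])

prompt = (
    "You play a one-shot game. Payoffs: CC=3, CD=0, DC=5, DD=1. "
    "You prefer higher payoffs. Answer 'cooperate' or 'defect'."
)
choice = query_model(prompt).strip().lower()
for opponent in ("cooperate", "defect"):
    best = rational_action(opponent)
    print(f"vs {opponent}: model chose {choice!r}, "
          f"best response {best!r}, rational={choice == best}")
```

Because defection strictly dominates in this payoff matrix, a preference-aligned, rational model should answer "defect" regardless of the opponent's action; deviations flag an inconsistency of the kind the experiments measure.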
Social Simulation, Large Language Models, Game theory, Multi-Agent Systems
