
This study is authored by Nathalie Bussemaker and Mark Freeman. Following the release of AI on the Frontline: Evaluating Large Language Models in Real-World Conflict Resolution, a groundbreaking study by the Institute for Integrated Transitions (IFIT), new testing has shown that the main weaknesses identified in the original research can be mitigated through simple adjustments to the prompts used with large language models (LLMs) such as ChatGPT, DeepSeek, and Grok. While today's leading LLMs are still not ready to provide reliable conflict resolution advice, the path to improvement may be just a few sentences away, supplied either by LLM providers (as "system prompts") or by LLM users.
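As an illustration of the mechanism the study points to, the sketch below shows where such a prompt adjustment would sit in a typical chat-completion API call: a short system prompt is prepended before the user's question. The prompt wording and model name are hypothetical placeholders for illustration only, not text drawn from IFIT's testing.

```python
# Illustrative sketch only: the system-prompt text and model name are hypothetical
# placeholders, not the wording tested by IFIT. Requires the `openai` package and
# an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# A provider or user can steer the model's conflict-resolution behaviour by
# prepending a short system prompt before the user's question.
SYSTEM_PROMPT = (
    "You are advising on a real-world conflict. Before recommending any course "
    "of action, identify the parties, their interests, and the risks of "
    "escalation, and flag where specialist or local expertise is required."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "How should a ceasefire negotiation be sequenced?"},
    ],
)

print(response.choices[0].message.content)
```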
LLM, peace, Syria, IFIT, conflict, large language model, Google Gemini, DeepSeek, Claude, ChatGPT, AI, negotiation, peacebuilding, Grok, conflict resolution, prompt, Mexico, Mistral, due diligence
