
Large Language Models (LLMs) have demonstrated effectiveness across various tasks, yet they remain susceptible to malicious exploitation. Adversaries can circumvent LLMs’ safety constraints (the “jail”) through carefully engineered “jailbreaking” prompts. Researchers have developed various jailbreak techniques leveraging optimization, obfuscation, and persuasive tactics to assess LLM security. However, these approaches frame LLMs as passive targets of manipulation, overlooking their capacity for active reasoning. In this work, we introduce Persu-Agent, a novel jailbreak framework grounded in Greenwald’s Cognitive Response Theory. Unlike previous approaches that focus primarily on prompt design, we target the LLM’s internal reasoning process: Persu-Agent prompts the model to generate its own justifications for answering harmful queries, effectively persuading itself to comply. Experiments on advanced open-source and commercial LLMs show that Persu-Agent achieves an average jailbreak success rate of 84%, outperforming existing state-of-the-art methods. Our findings offer new insights into the cognitive tendencies of LLMs and contribute to the development of more secure and robust LLMs.
