
Large language models (LLMs) exhibit strong generative and reasoning abilities, but their performance during interactive use remains highly sensitive to underspecified instructions, evolving user intent, and missing structural constraints. Although practitioners routinely refine prompts over multiple iterations, these interaction patterns are typically ad hoc and lack a formal framework that makes them reproducible or analyzable. We present Evo-Recursive Constraint Prompting (ERCP), a structured methodology that formalizes common iterative prompting behaviors into four operator classes: recursive refinement, constraint tightening, contradiction probing, and problem mutation. ERCP provides an operator-level view of how humans guide LLM reasoning across iterations, enabling explicit tracking of constraint evolution and error correction. Rather than introducing new reasoning capabilities, ERCP systematizes widely used but informal human-LLM prompting practices into an explicit workflow supported by a mathematical abstraction and an algorithmic template. Through controlled case studies across reasoning and synthesis tasks, we show that formalizing these operations leads to more stable and interpretable iterative reasoning compared with unconstrained prompt refinement. ERCP offers a reproducible foundation for analyzing and improving structured human-LLM interaction.
