Evo-Recursive Constraint Prompting: An Operator-Based Framework for Structured Human-LLM Interaction

Author: Mohabeer, Heman


Abstract

Large language models (LLMs) exhibit strong generative and reasoning abilities, but their performance during interactive use remains highly sensitive to underspecified instructions, evolving user intent, and missing structural constraints. Although practitioners routinely refine prompts over multiple iterations, these interaction patterns are typically ad hoc and lack a formal framework that makes them reproducible or analyzable.

We present Evo-Recursive Constraint Prompting (ERCP), a structured methodology that formalizes common iterative prompting behaviors into four operator classes: recursive refinement, constraint tightening, contradiction probing, and problem mutation. ERCP provides an operator-level view of how humans guide LLM reasoning across iterations, enabling explicit tracking of constraint evolution and error correction.

Rather than introducing new reasoning capabilities, ERCP systematizes widely used but informal human-LLM prompting practices into an explicit workflow supported by a mathematical abstraction and an algorithmic template. Through controlled case studies across reasoning and synthesis tasks, we show that formalizing these operations leads to more stable and interpretable iterative reasoning compared with unconstrained prompt refinement. ERCP offers a reproducible foundation for analyzing and improving structured human-LLM interaction.
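The record includes only the abstract, not the paper's algorithmic template, so the following is a minimal sketch of how the four operator classes named above could be organised as a structured prompting loop. All identifiers here (PromptState, ercp_loop, the stub llm callable) and the particular operator schedule are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Hypothetical sketch of an ERCP-style operator loop. The operator names follow
# the abstract; every identifier, signature, and the operator schedule below are
# illustrative assumptions, not the paper's algorithmic template.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class PromptState:
    """Tracks the evolving task, its explicit constraints, and the latest answer."""
    task: str
    constraints: List[str] = field(default_factory=list)
    answer: str = ""


def recursive_refinement(state: PromptState, llm: Callable[[str], str]) -> PromptState:
    """Feed the previous answer back and ask for an improved version."""
    state.answer = llm(f"{state.task}\nPrevious answer:\n{state.answer}\nRefine this answer.")
    return state


def constraint_tightening(state: PromptState, llm: Callable[[str], str], new_constraint: str) -> PromptState:
    """Add an explicit constraint and regenerate under the tightened specification."""
    state.constraints.append(new_constraint)
    spec = "\n- ".join(state.constraints)
    state.answer = llm(f"{state.task}\nConstraints:\n- {spec}")
    return state


def contradiction_probing(state: PromptState, llm: Callable[[str], str]) -> PromptState:
    """Ask the model to report contradictions between answer and constraints, then repair."""
    spec = "\n- ".join(state.constraints) or "(none)"
    report = llm(f"Answer:\n{state.answer}\nConstraints:\n- {spec}\n"
                 "List any contradictions, or reply 'none'.")
    if report.strip().lower() != "none":
        state.answer = llm(f"{state.task}\nFix these issues:\n{report}")
    return state


def problem_mutation(state: PromptState, llm: Callable[[str], str], variant: str) -> PromptState:
    """Restate the task as a related variant to test how the answer transfers."""
    state.task = variant
    spec = "\n- ".join(state.constraints) or "(none)"
    state.answer = llm(f"{state.task}\nConstraints:\n- {spec}")
    return state


def ercp_loop(task: str, constraints: List[str], llm: Callable[[str], str], rounds: int = 2) -> str:
    """One possible operator schedule; the paper's controlled case studies may use others."""
    state = PromptState(task=task)
    state.answer = llm(task)
    for c in constraints:                      # tighten the specification one constraint at a time
        state = constraint_tightening(state, llm, c)
    for _ in range(rounds):                    # alternate refinement with contradiction checks
        state = recursive_refinement(state, llm)
        state = contradiction_probing(state, llm)
    return state.answer


if __name__ == "__main__":
    # Stub "LLM" so the sketch runs without any external API.
    stub = lambda prompt: "none" if "contradictions" in prompt else prompt[-60:]
    print(ercp_loop("Summarise the ERCP framework.", ["Use at most three sentences."], stub))
```

The point of the sketch is the operator-level structure the abstract describes: each operator takes and returns an explicit state, which is what makes constraint evolution and error correction trackable across iterations.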
