
This paper proposes the L2MP Framework, a Logic-Layered, Modular, and Memory-Preserving AI architecture, designed to address limitations of current LLMs: the lack of persistent memory, the inability to self-learn, and weak handling of prompt structure. The framework introduces multi-module routing for task segmentation (e.g., summarization, coding, reasoning), user-specific memory layers stored in vector databases, and ethical real-time web integration for continual learning. Experimental results demonstrate significant improvements in accuracy, relevance, and user satisfaction. L2MP aims to lay a foundation for more intelligent, personalized, and autonomous AI assistants.
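The abstract names two concrete mechanisms: multi-module routing for task segmentation and user-specific memory kept in a vector database. The sketch below illustrates both ideas in miniature under stated assumptions; all names (UserMemory, route, embed) are hypothetical and not the paper's actual interfaces, and a hash-based embedding stands in for a real encoder and vector store so the example runs on its own.

```python
# Illustrative sketch only: toy task routing and a per-user vector memory layer.
# Names and the hashing "embedding" are assumptions, not the L2MP implementation.
import hashlib
import math
from collections import defaultdict

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy embedding: hash tokens into a fixed-size vector (stand-in for a real encoder)."""
    vec = [0.0] * dim
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

class UserMemory:
    """Per-user memory layer: stores past interactions as vectors, recalls by similarity."""
    def __init__(self) -> None:
        self._store = defaultdict(list)  # user_id -> list of (vector, text)

    def add(self, user_id: str, text: str) -> None:
        self._store[user_id].append((embed(text), text))

    def recall(self, user_id: str, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self._store[user_id], key=lambda item: cosine(q, item[0]), reverse=True)
        return [text for _, text in ranked[:k]]

ROUTES = {  # crude keyword routing into task-specific modules
    "coding": ("code", "function", "bug", "compile"),
    "summarization": ("summarize", "summary", "tl;dr"),
    "reasoning": ("why", "explain", "prove", "reason"),
}

def route(prompt: str) -> str:
    """Pick the module whose keywords appear in the prompt; default to reasoning."""
    lowered = prompt.lower()
    for module, keywords in ROUTES.items():
        if any(kw in lowered for kw in keywords):
            return module
    return "reasoning"

if __name__ == "__main__":
    memory = UserMemory()
    memory.add("alice", "Prefers Python examples with type hints.")
    prompt = "Please summarize our last conversation."
    print(route(prompt))                    # -> "summarization"
    print(memory.recall("alice", prompt))   # -> most similar stored interactions
```

In a full system the keyword table would presumably be replaced by a learned classifier and the in-memory store by a persistent vector database; the structural point is the separation between routing and per-user recall.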
Prompt Engineering, LLM, Memory AI, Modular AI, Self-Learning Systems, AI Agents
