Preprint
Data sources: ZENODO

Living Memory Inference: Separating Knowledge from Reasoning in AI Systems

Authors: Al Asha'l


Abstract

We present Living Memory Inference (LMI), a method that separates knowledge from reasoning in AI systems. In contrast to Retrieval-Augmented Generation (RAG), which treats external storage as a read-only supplement to a model's internal knowledge, LMI inverts this relationship: the external knowledge store becomes the primary source of intelligence, while the language model serves exclusively as a stateless reasoning mechanism over injected facts. The store is not static: it grows, decays, and self-corrects through autonomous write-back, consolidation, and contradiction detection after every inference. We define the LMI method, describe its three-layer architecture, and present Loci, a reference open-source implementation in Go backed by PostgreSQL with pgvector for vector similarity search. We evaluate Loci across 120 test cases spanning six benchmark suites and thirteen domains. Loci achieves perfect grounding (1.00) across all 120 cases, including 25 adversarial scenarios designed to induce hallucination, perfect answer quality (1.00) on complex reasoning chains, and a 58% reduction in hallucinations versus an ungrounded baseline of the same model. This is a systems and position paper; evaluation on standard public benchmarks is identified as the primary direction for future work.
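The inference cycle the abstract describes (retrieve facts from the external store, reason statelessly over only those facts, then write back with contradiction checking) can be sketched as follows. This is a minimal illustrative sketch, not Loci's actual API: the `Store`, `Fact`, `Retrieve`, `WriteBack`, and `Infer` names are hypothetical, the in-memory keyword match stands in for pgvector similarity search, and the contradiction check is deliberately crude.

```go
package main

import (
	"fmt"
	"strings"
)

// Fact is one grounded statement in the external knowledge store.
// (Hypothetical type; Loci's actual schema lives in PostgreSQL.)
type Fact struct {
	Text       string
	Confidence float64
}

// Store is an in-memory stand-in for the PostgreSQL/pgvector backend.
type Store struct {
	facts []Fact
}

// Retrieve returns facts sharing a keyword with the query -- a toy
// substitute for vector similarity search over embeddings.
func (s *Store) Retrieve(query string) []Fact {
	var hits []Fact
	for _, f := range s.facts {
		lower := strings.ToLower(f.Text)
		for _, w := range strings.Fields(strings.ToLower(query)) {
			if strings.Contains(lower, w) {
				hits = append(hits, f)
				break
			}
		}
	}
	return hits
}

// WriteBack stores a new fact after a (crude) contradiction check:
// it rejects the fact when an existing fact is its literal negation.
func (s *Store) WriteBack(f Fact) bool {
	for _, old := range s.facts {
		if strings.Contains(old.Text, "not "+f.Text) ||
			strings.Contains(f.Text, "not "+old.Text) {
			return false // contradiction detected; caller must reconcile
		}
	}
	s.facts = append(s.facts, f)
	return true
}

// Infer is the stateless reasoning step: it answers only from the
// injected facts, never from model-internal knowledge, so an empty
// fact set yields an explicit refusal rather than a hallucination.
func Infer(query string, facts []Fact) string {
	if len(facts) == 0 {
		return "insufficient grounding"
	}
	return facts[0].Text
}

func main() {
	store := &Store{facts: []Fact{
		{Text: "pgvector supports cosine similarity", Confidence: 0.9},
	}}
	facts := store.Retrieve("pgvector similarity")
	fmt.Println(Infer("pgvector similarity", facts))
	// After inference, new knowledge flows back into the store.
	store.WriteBack(Fact{Text: "answered a pgvector query", Confidence: 0.5})
}
```

The key inversion relative to RAG is visible in `Infer`: with no retrieved facts it refuses rather than falling back on the model's parametric knowledge, which is how the paper's grounding guarantee is framed.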
