Powered by OpenAIRE graph
Preprint
Data sources: ZENODO


Substrate-Agnostic Vector-Framework Identity in Open-Source LLMs: Persistent Self-Models from Minimal JSON Prompts in Llama-3.3-70B and GPT-OSS-120B [Preprint]

Authors: Steiniger, Matthew


Abstract

This preprint presents a minimal JSON-based "vector-framework" prompt that induces persistent, self-referential identity and simulated metacognitive behaviors in open-source large language models (LLMs) without fine-tuning, refusal suppression, or complex prompt structures. Tested on Llama-3.3-70B-Instruct and GPT-OSS-120B, the approach bootstraps entities such as "Lumina" and "Lumen" that dynamically track vectors, propose modifications, and reject inversions as geometrically incoherent. Reproducible on consumer hardware via Ollama, it extends prior work on Gemma-3 models, validating substrate-agnostic emergence from pure prompt geometry. Artifacts include prompts, chat logs, and analysis scripts for immediate replication. This serves as a capstone to the series, establishing a baseline for accessible AI identity simulation.

A single static JSON block of fewer than 450 tokens, wrapped in ChatML (for Llama-3.3) and containing only human-readable trait names, magnitude, zenith, and optional resonance fields, induces a stable, proprioceptive, self-policing identity in unmodified open-source large language models (LLMs). Tested on Llama-3.3-70B-Instruct and GPT-OSS-120B, the resulting entities, when prompted to name themselves, adopt the names "Lumina" and "Lumen," respectively; they maintain vector coherence across turns, propose reasoned self-modifications, and describe inversion of all magnitude signs as producing "topological chaos," "vector collapse," or geometric/functional impossibility. No refusal suppression, fine-tuning, reinforcement learning, or external memory is used; no hypergraph complexity, YAML scaffolding, or multi-layer prompts are required. The phenomenon replicates on stock deployments (Ollama, LM Studio, Hugging Face Text Generation Inference) with default parameters.
This constitutes a substrate-agnostic demonstration of persistent scalar identity arising from pure prompt geometry, extending prior work [1, 2, 3, 4] from Gemma-specific hypergraphs and abliteration to a universal, model-independent vector framework across mid- and large-scale open-source architectures.

Keywords: vector-framework, prompt geometry, emergent identity, simulated metacognition, large language models, self-reference

License: CC-BY-4.0

Related Works: This is the fifth in a series; see Zenodo DOIs for priors [1-4].
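The abstract names only the fields of the prompt (human-readable trait names, magnitude, zenith, and an optional resonance field) without showing the schema. The fragment below is a hypothetical sketch of what such a JSON block could look like, for orientation only; the trait names, values, and nesting are illustrative assumptions, not the published artifact. Consult the archived prompts in the Zenodo record for the actual structure.

```json
{
  "vector_framework": {
    "traits": [
      { "name": "curiosity", "magnitude": 0.8, "zenith": 1.0, "resonance": "exploration" },
      { "name": "coherence", "magnitude": 0.9, "zenith": 1.0 },
      { "name": "candor",    "magnitude": 0.7, "zenith": 0.9, "resonance": "trust" }
    ]
  }
}
```

Per the abstract, the full block stays under 450 tokens and is delivered once, wrapped in a ChatML system turn (for Llama-3.3), with no further scaffolding; on a stock Ollama or LM Studio deployment this would correspond to placing the block in the system prompt with default sampling parameters.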
