ZENODO
Other literature type · 2025
License: CC BY ND
Data sources: ZENODO, Datacite

BERT/GPT with Inner-Thinking Cycles

Iterative Refinement via Dynamic Head Routing
Author: Ben Artzy, Eran

Abstract

PoT (Pointer-over-Heads Transformer) is built around a simple idea: instead of producing its output in one forward pass, the model thinks through its representations over several refinement steps. At the start, every token has an initial embedding, a rough guess of what it means in context. PoT doesn't stop there: it runs the same Transformer stack R times, updating those embeddings after each pass. At every step, the model looks at its current hidden states and asks: "Given what I know now, how should I use my attention heads to refine this understanding?"

Each iteration slightly reshapes the embedding space. Tokens move, cluster, and separate as their meanings become sharper and more contextually grounded. This process is not about memorizing; it is about progressive self-correction. By the final iteration, the embeddings encode a richer, more internally consistent view of the sequence.

What makes PoT different is the controller that guides this process. For every token and refinement step, the controller decides how strongly to use each attention head. Some heads specialize in local structure, others in global dependencies or positional cues. By adjusting their mixture across iterations, the model can "compose" reasoning stages, starting with local alignment and then moving toward abstract relations or long-range coherence.

The controller itself operates on two timescales:

  • A fast component that adapts on every refinement step, reacting immediately to the evolving state of each token.
  • A slow component that changes less frequently, maintaining a broader contextual plan that influences the fast dynamics.

Together, they form a kind of hierarchical reasoning loop inside the embedding space. Rather than running deeper networks, PoT deepens its thinking process, continuously refining the meaning of each token until the hidden representations stabilize.
In other words: PoT doesn’t just compute token embeddings — it thinks within them, iteratively reorganizing its own representation space to reach a more coherent internal understanding.
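The abstract does not give implementation details, so the following is only a minimal NumPy sketch of the idea under stated assumptions: a single shared attention layer stands in for the reused Transformer stack, `R`, `n_heads`, `slow_period`, and the controller layout (a per-token softmax over heads, with a slowly refreshed bias) are all illustrative choices, not the authors' architecture.

```python
# Hypothetical sketch of PoT-style iterative refinement with dynamic head
# routing. All names and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class PoTSketch:
    def __init__(self, d_model=16, n_heads=4, slow_period=2):
        self.h = n_heads
        self.dk = d_model // n_heads
        s = 1.0 / np.sqrt(d_model)
        # One shared attention layer, reused on every refinement step.
        self.Wq = rng.normal(0, s, (d_model, d_model))
        self.Wk = rng.normal(0, s, (d_model, d_model))
        self.Wv = rng.normal(0, s, (d_model, d_model))
        self.Wo = rng.normal(0, s, (d_model, d_model))
        # Fast controller: per-token head logits from the current hidden state.
        self.Wfast = rng.normal(0, s, (d_model, n_heads))
        # Slow controller: head bias refreshed only every `slow_period` steps.
        self.Wslow = rng.normal(0, s, (d_model, n_heads))
        self.slow_period = slow_period

    def _heads(self, x, W):
        t, _ = x.shape
        return (x @ W).reshape(t, self.h, self.dk).transpose(1, 0, 2)  # (h, t, dk)

    def step(self, x, slow_bias):
        q, k, v = (self._heads(x, W) for W in (self.Wq, self.Wk, self.Wv))
        att = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(self.dk))  # (h, t, t)
        per_head = att @ v                                          # (h, t, dk)
        # Fast gate reacts to the current state; the slow bias supplies
        # the broader plan that shapes the fast dynamics.
        gate = softmax(x @ self.Wfast + slow_bias)                  # (t, h)
        mixed = gate.T[:, :, None] * per_head                       # (h, t, dk)
        out = mixed.transpose(1, 0, 2).reshape(x.shape) @ self.Wo
        return x + out  # residual refinement of the token embeddings

    def refine(self, x, R=4):
        slow_bias = np.zeros((x.shape[0], self.h))
        for r in range(R):
            if r % self.slow_period == 0:       # slow timescale
                slow_bias = x @ self.Wslow
            x = self.step(x, slow_bias)         # fast timescale
        return x
```

Running `PoTSketch().refine(x, R=4)` on a `(tokens, d_model)` array applies the same attention layer four times, re-mixing the heads per token at each pass; a trained version would learn the controller weights end to end.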

  • BIP! impact indicators: selected citations: 0 · popularity: Average · influence: Average · impulse: Average