
This manuscript brings together a set of independent interpretability projects I’ve been working on throughout 2025. It focuses on the internal training dynamics of transformer models, with an emphasis on how specific circuits form, stabilize, and interact over the course of training. The work is not a theory of consciousness and doesn’t rely on speculative metaphors; it stays at the level of mechanisms and training behavior. The goal is to give a coherent frame for several patterns that show up repeatedly across mechanistic interpretability experiments.

Topics covered include:

• early-layer “suppressor” circuits and their role in hedging and calibration
• changes in model behavior when suppressor circuits are removed or ablated (see the sketch after this list)
• circuit-level trajectories observed during grokking
• how noise, variance, and optimization pressure shape learning early vs. late in training
• synchronization patterns across heads and layers
• evidence for narrow “critical windows” where certain circuits reliably emerge
• relationships between loss-landscape transitions and circuit formation

The manuscript is an early draft. Some claims will likely need refinement as more experiments run or new evidence emerges. Feedback from researchers working on mechanistic interpretability, training dynamics, or related areas is welcome.
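To make the ablation item concrete, here is a minimal sketch of zero-ablating a single attention head and measuring the resulting shift in the next-token distribution. Everything here is a hypothetical illustration, not the manuscript’s actual setup: the toy model, the dimensions, the choice of head, and the use of zero-ablation (rather than, say, mean-ablation) are all assumptions for the sake of a runnable example.

```python
# Minimal sketch: zero-ablate one attention head in a toy transformer and
# compare next-token distributions. All names, sizes, and the ablated head
# index are hypothetical; a causal mask is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
D_MODEL, N_HEADS, SEQ, VOCAB = 32, 4, 8, 50  # toy sizes (hypothetical)
HEAD_DIM = D_MODEL // N_HEADS

class ToyAttention(nn.Module):
    """Multi-head self-attention with per-head outputs kept explicit,
    so a single head can be zeroed before the output projection."""
    def __init__(self):
        super().__init__()
        self.qkv = nn.Linear(D_MODEL, 3 * D_MODEL)
        self.out = nn.Linear(D_MODEL, D_MODEL)
        self.ablate_head = None  # set to an int to zero that head's output

    def forward(self, x):
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape each to (B, heads, T, head_dim)
        q, k, v = (t.view(B, T, N_HEADS, HEAD_DIM).transpose(1, 2)
                   for t in (q, k, v))
        attn = F.softmax(q @ k.transpose(-2, -1) / HEAD_DIM**0.5, dim=-1)
        z = attn @ v  # (B, heads, T, head_dim)
        if self.ablate_head is not None:
            z = z.clone()
            z[:, self.ablate_head] = 0.0  # zero-ablation of one head
        return self.out(z.transpose(1, 2).reshape(B, T, D_MODEL))

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.attn = ToyAttention()
        self.unembed = nn.Linear(D_MODEL, VOCAB)

    def forward(self, tokens):
        x = self.embed(tokens)
        x = x + self.attn(x)  # single residual block, attention only
        return self.unembed(x)

model = ToyModel().eval()
tokens = torch.randint(0, VOCAB, (1, SEQ))

with torch.no_grad():
    clean = F.log_softmax(model(tokens)[0, -1], dim=-1)
    model.attn.ablate_head = 2  # hypothetical "suppressor" head
    ablated = F.log_softmax(model(tokens)[0, -1], dim=-1)

# KL divergence between clean and ablated next-token distributions:
# one crude measure of how much behavior shifts when the head is removed.
kl = F.kl_div(ablated, clean, log_target=True, reduction="sum")
print(f"KL(clean || ablated) = {kl.item():.4f}")
```

In practice this kind of experiment is usually run with a hooked interpretability framework (e.g., TransformerLens) on a trained model rather than a hand-rolled toy, and the choice between zero-ablation and mean-ablation matters for what the behavioral change actually measures.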
