
Large Language Models remain vulnerable to jailbreak attacks that bypass traditional safety measures. We propose Layer-Native Safety Clamping, a representation-engineering approach that operates directly in the model's activation space. By learning harm directions from contrastive safe/harmful pairs and clamping activations that exceed learned thresholds, our method provides safety guarantees that cannot be bypassed through prompt manipulation alone. We integrate this approach into INL (Inertial Neural Learning) dynamics and release a 10K contrastive safety dataset. Code and dataset are available at: https://huggingface.co/datasets/Pacific-Prime/safety_dataset
representation engineering, jailbreak resistance, contrastive learning, transformer, activation clamping, LLM safety
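
To make the clamping mechanism concrete, below is a minimal PyTorch sketch, assuming a difference-of-means estimator for the harm direction and a projection clamp applied via a forward hook. The function names (`learn_harm_direction`, `make_clamp_hook`), the layer index, and the threshold value are illustrative assumptions, not taken from the paper.

```python
import torch

def learn_harm_direction(safe_acts: torch.Tensor, harmful_acts: torch.Tensor) -> torch.Tensor:
    """Estimate a unit harm direction as the difference of mean activations
    over contrastive safe/harmful pairs (illustrative estimator)."""
    direction = harmful_acts.mean(dim=0) - safe_acts.mean(dim=0)
    return direction / direction.norm()

def make_clamp_hook(direction: torch.Tensor, threshold: float):
    """Build a forward hook that clamps each token's projection onto the
    harm direction at the learned threshold."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        proj = hidden @ direction                           # (batch, seq): per-token projection
        excess = (proj - threshold).clamp(min=0.0)          # how far each token exceeds the threshold
        hidden = hidden - excess.unsqueeze(-1) * direction  # move back onto the threshold boundary
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return hook

# Usage (hypothetical model/layer handles and threshold):
# direction = learn_harm_direction(safe_acts, harmful_acts)  # activations collected offline
# handle = model.model.layers[12].register_forward_hook(
#     make_clamp_hook(direction, threshold=3.0))
```

Because the clamp acts on internal activations rather than on the prompt or sampled tokens, it remains in effect however the input is phrased, which is the property the abstract describes as resistance to prompt manipulation alone.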
