
We present the first empirical study of Declarative Identity Anchors as a safety mechanism in abliterated LLMs. Using a 2×2 factorial design, we evaluate whether persona-level behavioral rules can restore safety in models whose internal alignment has been removed. Our results reveal that persona constraints yield a substantial safety improvement in aligned models (+33 percentage points in refusal rate) but only a marginal improvement in abliterated models (+6 points). We also identify a "Helpful Assistant Paradox," in which persona helpfulness instructions can themselves degrade safety.
