Small Language Models (SLMs) enable cost-effective, on-device, and latency-sensitive AI applications, yet their deployment in Traditional Chinese (TC) remains hindered by token-level instability: models unpredictably emit non-TC characters or code-switch into other languages. We address this practical reliability gap by creating PureTC-1B, a three-stage stabilization pipeline for Llama-3.2-1B-Instruct (an open-weight, instruction-tuned model released by Meta) [1] using parameter-efficient LoRA adapters [2]. Our method combines Continual Pre-Training (CPT) on TC-centric corpora, Supervised Fine-Tuning (SFT) with instruction data, and Direct Preference Optimization (DPO) [3] using TC-adherence preferences to improve monolingual robustness without full-model retraining.

On a benchmark designed to simulate real-world usage, PureTC-1B achieves a 51.3% relative reduction (micro-average) in non-TC output tokens versus the base model. On a Named Entity Translation (NET) task, PureTC-1B further reduces incorrect-language tokens by 77.2% relative to Llama-3B and 57.2% relative to Qwen-1.5B, indicating that robust TC adherence is attainable even at the 1B scale. The pipeline is reproducible, adapter-only, and hardware-friendly, offering practitioners a practical recipe for enhancing language stability in TC and potentially other non-English languages.
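To make the adapter-only setup concrete, the sketch below shows one way the pipeline's ingredients could be wired together: LoRA adapters attached to Llama-3.2-1B-Instruct and a DPO preference pair that rewards TC-only output over code-switched output. This is not the authors' released code; the libraries (transformers, peft), all hyperparameters, and the example preference data are assumptions for illustration only.

```python
# Minimal sketch, assuming the Hugging Face transformers/peft stack.
# Hyperparameters and data are illustrative, not the paper's settings.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Parameter-efficient LoRA adapters: only the low-rank matrices are trained,
# so the same adapter-only recipe can be reused across the CPT -> SFT -> DPO stages.
lora_cfg = LoraConfig(
    r=16,                               # hypothetical rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()      # sanity check: only adapter weights are trainable

# DPO stage: a preference pair where "chosen" stays in Traditional Chinese and
# "rejected" drifts into simplified characters. Field names follow a common
# preference-data convention; the text itself is illustrative.
preference_example = {
    "prompt": "請用繁體中文介紹台北 101。",   # "Introduce Taipei 101 in Traditional Chinese."
    "chosen": "台北 101 是位於臺北市信義區的摩天大樓。",
    "rejected": "台北 101 是位于台北市信义区的摩天大楼。",  # simplified-character drift (non-TC tokens)
}
```

In this sketch the base model's weights stay frozen throughout; each stage only updates (or stacks) adapter weights, which is what makes the recipe hardware-friendly at the 1B scale.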