Preprint
Data sources: ZENODO

Efficient Training of Robust Traditional Chinese LLaMA-1B on a Single Consumer GPU: Continual Pre-training, SFT, and DPO

Authors: 遲, 佑成; 段, 明濤; 侯, 詠皓;

Abstract

Small Language Models (SLMs) enable cost-effective, on-device, and latency-sensitive AI applications, yet their deployment in Traditional Chinese (TC) remains hindered by token-level instability: models unpredictably emit non-TC characters or code-switch into other languages. We address this practical reliability gap by creating PureTC-1B, a three-stage stabilization pipeline for Llama-3.2-1B-Instruct (an open-weight, instruction-tuned model released by Meta) [1] using parameter-efficient LoRA adapters [2]. Our method combines Continual Pre-Training (CPT) on TC-centric corpora, Supervised Fine-Tuning (SFT) with instruction data, and Direct Preference Optimization (DPO) [3] using TC-adherence preferences to improve monolingual robustness without full-model retraining.

On a benchmark designed to simulate real-world usage, PureTC-1B achieves a 51.3% relative reduction (micro-average) in non-TC output tokens versus the base model. On a Named Entity Translation (NET) task, PureTC-1B further reduces incorrect-language tokens by 77.2% relative to Llama-3B and 57.2% relative to Qwen-1.5B, indicating that robust TC adherence is attainable even at the 1B scale. The pipeline is reproducible, adapter-only, and hardware-friendly, offering practitioners a practical recipe to enhance language stability for TC and potentially other non-English languages.
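
To make the adapter-only recipe more concrete, below is a minimal sketch of the final DPO stage built on Hugging Face transformers, peft, and trl. This is not the authors' released code: the preference-data file name, LoRA hyperparameters, target modules, and training settings are illustrative assumptions, and exact trl/peft argument names vary across library versions.

# Hedged sketch (not the authors' code): attach a LoRA adapter to
# Llama-3.2-1B-Instruct and run the DPO stage with TRL.
# Dataset path and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Parameter-efficient LoRA adapter: only small low-rank matrices are trained,
# which keeps memory within a single consumer GPU's budget.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Preference data with "prompt"/"chosen"/"rejected" columns, where "chosen"
# responses stay in Traditional Chinese and "rejected" ones code-switch.
# The file name is hypothetical.
prefs = load_dataset("json", data_files="tc_adherence_prefs.jsonl", split="train")

args = DPOConfig(
    output_dir="puretc-1b-dpo",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    beta=0.1,  # DPO temperature
)

# Older trl releases pass the tokenizer via tokenizer= rather than
# processing_class=; with a PEFT model, the frozen base weights serve
# as the implicit reference policy.
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=prefs,
    processing_class=tokenizer,
)
trainer.train()

The CPT and SFT stages before this step would follow the same adapter-only pattern, swapping in a plain language-modeling objective over TC-centric corpora and an instruction-tuning dataset, respectively.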
