
SozKZ: Training Efficient Small Language Models for Kazakh from Scratch

Saken Tukenov · Independent Researcher · 2026

arXiv:2603.20854 · Download PDF · Models on HuggingFace · Tokenizer


Abstract

Kazakh, a Turkic language spoken by over 22 million people, remains underserved by existing multilingual language models, which allocate minimal capacity to low-resource languages and employ tokenizers ill-suited to agglutinative morphology. We present SozKZ, a family of Llama-architecture language models (50M–600M parameters) trained entirely from scratch on 9 billion tokens of Kazakh text with a dedicated 50K BPE tokenizer. We evaluate all models on three Kazakh benchmarks — multiple-choice cultural QA, reading comprehension (Belebele), and topic classification (SIB-200) — alongside five multilingual baselines ranging from 500M to 3B parameters. Our 600M model achieves 30.3% accuracy on Kazakh cultural QA, approaching the 32.0% of Llama-3.2-1B (2× larger), and 25.5% on SIB-200 topic classification, surpassing all evaluated multilingual models up to 2B parameters. These results demonstrate that small, dedicated models trained from scratch with a language-appropriate tokenizer offer a viable path for low-resource language technology. All models and the tokenizer are released under open licenses.
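The abstract argues that a dedicated BPE tokenizer suits Kazakh's agglutinative morphology, where suffixes stack onto a shared stem. A toy illustration of the core BPE merge loop (pure standard library, not the actual SozKZ tokenizer training pipeline; the Kazakh word forms and frequencies are made up for the example):

```python
from collections import Counter

def get_pair_counts(words):
    """Count adjacent symbol pairs across the corpus, weighted by word frequency."""
    pairs = Counter()
    for word, freq in words.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, words):
    """Replace every occurrence of the pair with its concatenation."""
    a, b = pair
    merged = {}
    for word, freq in words.items():
        symbols = word.split()
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and symbols[i] == a and symbols[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[" ".join(out)] = freq
    return merged

# Toy corpus of Kazakh forms sharing the stem "кітап" (book):
# кітап, кітаптар (books), кітаптарда (in the books).
corpus = {
    "к і т а п": 10,
    "к і т а п т а р": 6,
    "к і т а п т а р д а": 4,
}

merges = []
for _ in range(6):
    pairs = get_pair_counts(corpus)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)  # ties broken by insertion order
    merges.append(best)
    corpus = merge_pair(best, corpus)

print(merges)
print(corpus)
```

After six merges the stem "кітап" and the plural "кітаптар" become single tokens, so each suffixed form costs only one or two extra tokens instead of re-spelling the stem character by character; multilingual tokenizers that never learn such merges pay that cost on every Kazakh word.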


Benchmark Results

Accuracy (%) on three Kazakh benchmarks. Random baselines: MC QA = 25% (4-choice), Belebele = 25% (4-choice), SIB-200 = 14.3% (7-class).

Model          Params   MC QA   Belebele   SIB-200
SozKZ-50M      50M      —       27.0       25.5
SozKZ-150M     150M     24.7    27.0       25.5
SozKZ-300M     300M     28.3    27.8       25.5
SozKZ-600M     600M     30.3    27.0       25.5
Qwen-0.5B      500M     31.5    30.0       19.1
Llama-3.2-1B   1B       32.0    26.7       20.1
Qwen-1.5B      2B       37.1    29.9       11.8
Gemma-2B       2B       32.5    30.6       20.1
Llama-3.2-3B   3B       34.2    31.7       28.4
Fig. 1 — Task-level performance comparison across model families: SozKZ models vs. multilingual competitors.
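The table reports accuracy against stated random baselines (25% for the 4-choice tasks, 14.3% for 7-class SIB-200). A minimal accuracy scorer showing why a constant-answer model sits exactly at the 4-choice baseline (a sketch with hypothetical labels, not the paper's evaluation harness):

```python
def accuracy(predictions, gold):
    """Percentage of examples where the predicted choice matches the gold label."""
    assert len(predictions) == len(gold) and gold
    correct = sum(p == g for p, g in zip(predictions, gold))
    return 100.0 * correct / len(gold)

# Toy 4-choice set with balanced labels: a degenerate model that always
# answers "A" lands exactly on the 25% random baseline.
gold = ["A", "B", "C", "D"] * 5
always_a = ["A"] * len(gold)
print(accuracy(always_a, gold))  # 25.0
```

Against this floor, SozKZ-600M's 30.3% on MC QA and 25.5% on SIB-200 (vs. 14.3% random) reflect genuine, if modest, task signal rather than chance.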

Tokenizer Efficiency

Characters per token on Kazakh text — higher is better.

Tokenizer     Vocab Size   Chars/Token
SozKZ (50K)   50,000       5.82
Gemma         256,000      2.53
Mistral       32,768       2.41
Qwen 2.5      151,936      2.34
Llama 3       128,256      2.18
Fig. 2 — Tokenizer fertility comparison. SozKZ encodes Kazakh roughly 2.3× more efficiently (5.82 vs. 2.53 chars/token) than the best multilingual alternative.
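The chars-per-token figures above are a corpus-level ratio: total characters divided by total tokens produced. A minimal sketch assuming any `tokenize` callable (the two toy stand-ins below bound the range — character-level and whitespace tokenization — and are not the actual SozKZ BPE tokenizer):

```python
def chars_per_token(texts, tokenize):
    """Corpus-level compression: total characters divided by total tokens."""
    total_chars = sum(len(t) for t in texts)
    total_tokens = sum(len(tokenize(t)) for t in texts)
    return total_chars / total_tokens

# Toy Kazakh snippets; spaces count toward characters, as in raw text.
texts = ["Қазақстан Республикасы", "тілдік модельдер"]

char_level = chars_per_token(texts, list)       # one token per character: 1.0
word_level = chars_per_token(texts, str.split)  # one token per word
print(char_level, word_level)
```

A subword tokenizer falls between these two extremes; the table's 5.82 for SozKZ versus 2.18–2.53 for multilingual vocabularies means Kazakh text consumes well under half as many tokens — and hence context window and compute — under the dedicated tokenizer.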

Scaling Behavior

Fig. 3 — Performance vs. model size: SozKZ models alongside multilingual competitors.

Citation

@article{tukenov2026sozkz,
  title         = {SozKZ: Training Efficient Small Language Models for Kazakh from Scratch},
  author        = {Tukenov, Saken},
  year          = {2026},
  eprint        = {2603.20854},
  archivePrefix = {arXiv},
  url           = {https://arxiv.org/abs/2603.20854}
}
