ArXiv TLDR

From Where Words Come: Efficient Regularization of Code Tokenizers Through Source Attribution

arXiv: 2604.14053

Pavel Chizhov, Egor Bogomolov, Ivan P. Yamshchikov

cs.CL

TLDR

This paper introduces Source-Attributed BPE (SA-BPE), a regularization of BPE training for code tokenizers that substantially reduces under-trained tokens caused by data imbalance.

Key contributions

  • Shows that code tokenizers produce unused, under-trained tokens due to imbalance in the repository and language diversity of the training data.
  • Proposes Source-Attributed BPE (SA-BPE) to regularize BPE training and reduce overfitting to dominant sources.
  • SA-BPE modifies the BPE objective and introduces merge skipping (see the sketch after this list).
  • Substantially reduces under-trained tokens while keeping the inference procedure identical to regular BPE.
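
The digest doesn't spell out the exact SA-BPE objective, so the Python sketch below is only an illustration of the general idea, not the authors' implementation: candidate merges are scored by frequency weighted by how many distinct sources (e.g., repositories) they appear in, and merges observed in too few sources are skipped, so repetitive single-source strings cannot claim vocabulary slots. The `min_sources` threshold and the coverage weighting here are hypothetical choices made for this toy example.

```python
# Toy sketch of source-attributed BPE training. Illustration only:
# the scoring rule and threshold below are assumptions, not the
# paper's exact objective.
from collections import Counter, defaultdict


def get_pair_stats(corpus):
    """Count pair frequencies and the set of sources each pair occurs in."""
    freq = Counter()
    sources = defaultdict(set)
    for source_id, words in corpus.items():
        for word in words:
            for pair in zip(word, word[1:]):
                freq[pair] += 1
                sources[pair].add(source_id)
    return freq, sources


def merge_pair(corpus, pair):
    """Apply one merge to every token sequence in the corpus."""
    merged = pair[0] + pair[1]
    new_corpus = {}
    for source_id, words in corpus.items():
        new_words = []
        for word in words:
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                    out.append(merged)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_words.append(out)
        new_corpus[source_id] = new_words
    return new_corpus


def train_sa_bpe(corpus, num_merges, min_sources=2):
    """Score merges by frequency * source coverage; skip merges seen
    in fewer than `min_sources` distinct sources (merge skipping)."""
    merges = []
    total_sources = len(corpus)
    for _ in range(num_merges):
        freq, sources = get_pair_stats(corpus)
        candidates = {
            p: f * (len(sources[p]) / total_sources)
            for p, f in freq.items()
            if len(sources[p]) >= min_sources  # merge skipping
        }
        if not candidates:
            break
        best = max(candidates, key=candidates.get)
        merges.append(best)
        corpus = merge_pair(corpus, best)
    return merges


# Toy corpus: token sequences keyed by source (e.g., repository) id.
# Pairs unique to repo_c's repetitive string never get merged.
corpus = {
    "repo_a": [list("low"), list("lower"), list("low")],
    "repo_b": [list("lowest"), list("low")],
    "repo_c": [list("widget_widget_widget")],
}
print(train_sa_bpe(corpus, num_merges=5))
```

Because inference with a BPE tokenizer only replays the learned merge list, filtering merges at training time in this way leaves the inference procedure unchanged, which matches the paper's claim of production suitability.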

Why it matters

Poor tokenization hurts LLM efficiency and safety, weakening defenses against jailbreak attacks and raising the risk of hallucinations. This paper addresses under-trained tokens in code tokenizers caused by data imbalance, and SA-BPE offers a practical, production-ready method to improve tokenizer quality and, with it, LLM performance and robustness.

Original Abstract

Efficiency and safety of Large Language Models (LLMs), among other factors, rely on the quality of tokenization. A good tokenizer not only improves inference speed and language understanding but also provides extra defense against jailbreak attacks and lowers the risk of hallucinations. In this work, we investigate the efficiency of code tokenization, in particular from the perspective of data source diversity. We demonstrate that code tokenizers are prone to producing unused, and thus under-trained, tokens due to the imbalance in repository and language diversity in the training data, as well as the dominance of source-specific, repetitive tokens that are often unusable in future inference. By modifying the BPE objective and introducing merge skipping, we implement different techniques under the name Source-Attributed BPE (SA-BPE) to regularize BPE training and minimize overfitting, thereby substantially reducing the number of under-trained tokens while maintaining the same inference procedure as with regular BPE. This provides an effective tool suitable for production use.
