ArXiv TLDR

Stochasticity in Tokenisation Improves Robustness

arXiv: 2604.16037

Sophie Steger, Rui Li, Sofiane Ennadir, Anya Sims, Arno Solin + 2 more

cs.CL

TLDR

Stochastic tokenization during pre-training and fine-tuning significantly improves large language model robustness against adversarial and random perturbations.

Key contributions

  • Demonstrates stochastic tokenization improves LLM robustness to adversarial and random perturbations.
  • Systematically studies stochastic tokenization across various learning regimes, datasets, and architectures.
  • Shows that evaluating on non-canonical tokenizations reduces the accuracy of a canonically trained Llama-1b by 29.8%.
  • Confirms stochastic tokenization preserves accuracy without increasing inference cost.

Why it matters

LLMs are vulnerable to perturbations in how their input is tokenized, which undermines their robustness. This paper offers a practical remedy: stochastic tokenization during training significantly enhances resilience without adding inference cost, a useful step towards more robust and reliable AI systems.

Original Abstract

The widespread adoption of large language models (LLMs) has increased concerns about their robustness. Vulnerabilities in perturbations of tokenisation of the input indicate that models trained with a deterministic canonical tokenisation can be brittle to adversarial attacks. Recent studies suggest that stochastic tokenisation can deliver internal representations that are less sensitive to perturbations. In this paper, we analyse how stochastic tokenisations affect robustness to adversarial attacks and random perturbations. We systematically study this over a range of learning regimes (pre-training, supervised fine-tuning, and in-context learning), data sets, and model architectures. We show that pre-training and fine-tuning with uniformly sampled stochastic tokenisations improve robustness to random and adversarial perturbations. Evaluating on uniformly sampled non-canonical tokenisations reduces the accuracy of a canonically trained Llama-1b model by 29.8%. We find that training with stochastic tokenisation preserves accuracy without increasing inference cost.
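The abstract's core idea, training on tokenisations sampled uniformly from all valid segmentations rather than the single canonical one, can be sketched with a small dynamic program. The snippet below is an illustrative toy, not the paper's implementation: the vocabulary `VOCAB` and the function names are assumptions, and real tokenisers operate over learned BPE/unigram vocabularies.

```python
import random

# Toy subword vocabulary (assumption: real systems use learned vocabularies).
VOCAB = {"u", "n", "d", "o", "un", "do", "undo"}

def count_segmentations(word, vocab):
    """counts[i] = number of valid tokenisations of word[i:]."""
    n = len(word)
    counts = [0] * (n + 1)
    counts[n] = 1  # empty suffix has exactly one (empty) tokenisation
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n + 1):
            if word[i:j] in vocab:
                counts[i] += counts[j]
    return counts

def sample_tokenisation(word, vocab, rng=random):
    """Sample one tokenisation uniformly from all valid segmentations."""
    counts = count_segmentations(word, vocab)
    if counts[0] == 0:
        raise ValueError("word cannot be tokenised with this vocabulary")
    tokens, i = [], 0
    while i < len(word):
        # Weight each candidate next token by how many completions follow it,
        # so every full segmentation is equally likely.
        choices = [(j, counts[j]) for j in range(i + 1, len(word) + 1)
                   if word[i:j] in vocab]
        r = rng.randrange(sum(c for _, c in choices))
        for j, c in choices:
            if r < c:
                tokens.append(word[i:j])
                i = j
                break
            r -= c
    return tokens
```

Here `"undo"` has five segmentations under this toy vocabulary, and each call to `sample_tokenisation` returns one of them with equal probability; feeding such samples to the model during pre-training or fine-tuning is the stochastic-tokenisation regime the paper studies.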
