ArXiv TLDR

PairAlign: A Framework for Sequence Tokenization via Self-Alignment with Applications to Audio Tokenization

arXiv: 2605.06582

Adhiraj Banerjee, Vipul Arora

cs.LG, cs.CL, cs.SD

TLDR

PairAlign introduces a self-alignment framework for compact audio tokenization, treating it as conditional sequence generation to improve consistency and edit-distance preservation.

Key contributions

  • Introduces PairAlign, a novel framework for compact audio tokenization using sequence-level self-alignment.
  • Models tokenization as conditional sequence generation, learning token identity, order, and length autoregressively.
  • Achieves strong cross-view consistency and, on TIMIT retrieval, reduces archive token count by 55% while preserving edit-distance search.
  • Outperforms a dense geometric tokenizer in length control and edit-trajectory stability under 100 ms shifts.

Why it matters

This paper addresses the challenge of learning consistent and compact audio tokens, which is crucial for operations like comparison and retrieval. PairAlign's sequence-level self-alignment approach offers a significant improvement over local token assignment methods. It provides a more robust and efficient way to represent audio symbolically, akin to how language uses tokens.
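The comparison and retrieval operations mentioned here reduce to edit distance over token sequences. As background (not code from the paper), a standard Levenshtein-distance implementation that works on strings or on lists of token ids:

```python
def edit_distance(a, b):
    """Levenshtein distance between two token sequences, O(len(b)) memory."""
    prev = list(range(len(b) + 1))          # distances from empty prefix of a
    for i, x in enumerate(a, 1):
        cur = [i]                           # distance to empty prefix of b
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution (0 if match)
        prev = cur
    return prev[-1]
```

For example, `edit_distance("kitten", "sitting")` is 3; the same function compares token-id sequences such as `[3, 7, 1]` and `[3, 2, 1]` directly, which is what "edit-distance search" over a tokenized archive amounts to.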

Original Abstract

Many operations on sensory data -- comparison, memory, retrieval, and reasoning -- are naturally expressed over discrete symbolic structures. In language this interface is given by tokens; in audio, it must be learned. Existing audio tokenizers rely on quantization, clustering, or codec reconstruction, assigning tokens locally, so sequence consistency, compactness, length control, termination, and edit similarity are rarely optimized directly. We introduce PairAlign, a framework for compact audio tokenization through sequence-level self-alignment. PairAlign treats tokenization as conditional sequence generation: an encoder maps speech to a continuous condition, and an autoregressive decoder generates tokens from BOS, learning token identity, order, length, and EOS placement. Given two content-preserving views, each view's sequence is trained to be likely under the other's representation, while unrelated examples provide competing sequences. This gives a scalable surrogate for edit-distance preservation while discouraging many-to-one collapse. PairAlign starts from VQ-style tokenization and refines it with EMA-teacher targets, cross-paired teacher forcing, prefix corruption, likelihood contrast, and length control. On 3-second speech, PairAlign learns compact, non-degenerate sequences with broad vocabulary usage and strong cross-view consistency. On TIMIT retrieval, it preserves edit-distance search while reducing archive token count by 55%. A continuous-sweep probe shows lower local overlap than a dense geometric tokenizer, but stronger length control and bounded edit trajectories under 100 ms shifts. PairAlign is a sequence-symbolic predictive learner: like JEPA-style objectives, it predicts an abstract target from another view as a learned variable-length symbolic sequence, not a continuous latent.
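The training signal described in the abstract can be sketched as a toy objective: each view's token sequence is scored under the *other* view's condition (cross-paired teacher forcing), and unrelated conditions must explain the sequence worse by a margin (likelihood contrast). Everything below — the linear one-step "decoder", the token ids, and the margin value — is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def seq_nll(logits, tokens):
    """Negative log-likelihood of a token sequence (EOS included).
    logits: (T, V) per-step scores; tokens: length-T list of ids."""
    z = logits - logits.max(axis=1, keepdims=True)          # stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -sum(logp[t, tok] for t, tok in enumerate(tokens))

def pairalign_loss(decode, cond_a, cond_b, toks_a, toks_b, neg_conds, margin=1.0):
    """Toy cross-view objective: each view's sequence should be likely
    under the other view's condition, and less likely under negatives."""
    nll_a = seq_nll(decode(cond_b, len(toks_a)), toks_a)    # swap conditions
    nll_b = seq_nll(decode(cond_a, len(toks_b)), toks_b)
    contrast = 0.0                                          # hinge on NLL gap
    for c in neg_conds:
        contrast += max(0.0, margin + nll_a - seq_nll(decode(c, len(toks_a)), toks_a))
    return float(nll_a + nll_b + contrast)

# Hypothetical setup: a linear "decoder" emitting the same logits each step.
rng = np.random.default_rng(0)
d, V = 8, 16
W = rng.normal(size=(V, d))
decode = lambda cond, T: np.tile(W @ cond, (T, 1))

cond_a = rng.normal(size=d)
cond_b = cond_a + 0.05 * rng.normal(size=d)     # content-preserving view
negs = [rng.normal(size=d) for _ in range(2)]   # unrelated examples
toks_a, toks_b = [3, 7, 1], [3, 7, 1]           # made-up token ids, 1 = EOS
loss = pairalign_loss(decode, cond_a, cond_b, toks_a, toks_b, negs)
```

The real system conditions the decoder on an encoder output and updates it with EMA-teacher targets, prefix corruption, and length control; this sketch only shows the shape of the cross-paired likelihood term.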
