ArXiv TLDR

Code-Switching Information Retrieval: Benchmarks, Analysis, and the Limits of Current Retrievers

arXiv:2604.17632

Qingcheng Zeng, Yuheng Lu, Zeqi Zhou, Heli Qi, Puxuan Yu + 4 more

cs.IR

TLDR

This paper introduces new benchmarks (CSR-L, CS-MTEB) to show how code-switching severely degrades information retrieval performance, even for multilingual models.

Key contributions

  • Introduces CSR-L (Code-Switching Retrieval benchmark-Lite), a human-annotated dataset of authentic mixed-language IR queries.
  • Demonstrates that code-switching severely degrades IR performance, even for robust multilingual models (a toy illustration follows this list).
  • Proposes CS-MTEB, a comprehensive benchmark covering 11 diverse code-switching tasks, on which performance drops by up to 27%.
  • Shows that standard multilingual techniques such as vocabulary expansion do not fully close these performance gaps.
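The lexical failure mode behind that degradation is easy to reproduce for the statistical paradigm. Here is a minimal sketch, with toy data rather than CSR-L; the example sentences, whitespace tokenization, and use of the rank_bm25 package are all illustrative assumptions, not the paper's setup:

```python
# Minimal sketch (toy data, not CSR-L): a code-switched query loses
# term overlap with its target document under BM25.
from rank_bm25 import BM25Okapi

corpus = [
    "deep learning models require large amounts of training data",
    "reinforcement learning agents learn from reward signals",
]
tokenized_corpus = [doc.split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

query_mono = "how much training data do deep learning models need"
# Code-switched variant: key terms rendered in Chinese no longer
# match the English document tokens. (Whitespace tokenization of
# Chinese is a simplification for illustration.)
query_cs = "deep learning 模型 需要 多少 训练 数据"

print(bm25.get_scores(query_mono.split()))  # strong match on doc 0
print(bm25.get_scores(query_cs.split()))    # only "deep learning" still matches
```

Dense and late-interaction retrievers avoid exact term matching, but the paper finds they degrade too, for the embedding-space reason sketched after the abstract below.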

Why it matters

This paper addresses a crucial gap in IR by quantifying the impact of code-switching, a common linguistic phenomenon. It provides essential benchmarks and analysis, revealing the fragility of current systems and setting a clear direction for future multilingual IR research.

Original Abstract

Code-switching is a pervasive linguistic phenomenon in global communication, yet modern information retrieval systems remain predominantly designed for, and evaluated within, monolingual contexts. To bridge this critical disconnect, we present a holistic study dedicated to code-switching IR. We introduce CSR-L (Code-Switching Retrieval benchmark-Lite), constructing a dataset via human annotation to capture the authentic naturalness of mixed-language queries. Our evaluation across statistical, dense, and late-interaction paradigms reveals that code-switching acts as a fundamental performance bottleneck, degrading the effectiveness of even robust multilingual models. We demonstrate that this failure stems from substantial divergence in the embedding space between pure and code-switched text. Scaling this investigation, we propose CS-MTEB, a comprehensive benchmark covering 11 diverse tasks, where we observe performance declines of up to 27%. Finally, we show that standard multilingual techniques like vocabulary expansion are insufficient to resolve these deficits completely. These findings underscore the fragility of current systems and establish code-switching as a crucial frontier for future IR optimization.
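The embedding-space divergence the abstract points to can be probed with an off-the-shelf multilingual encoder. A minimal sketch, assuming a sentence-transformers checkpoint and made-up query pairs; this is not the paper's evaluation pipeline:

```python
# Minimal sketch (illustrative, not the authors' setup): compare how a
# monolingual query and a code-switched variant score against the same
# document in a multilingual embedding space.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

doc = "Deep learning models require large amounts of training data."
query_mono = "How much training data do deep learning models need?"
# Code-switched (English-Chinese) paraphrase of the same intent.
query_cs = "Deep learning 模型需要多少 training data?"

doc_emb, mono_emb, cs_emb = model.encode([doc, query_mono, query_cs])

# If code-switching shifts the query away from the document in
# embedding space, the second score will be noticeably lower.
print("mono -> doc:", util.cos_sim(mono_emb, doc_emb).item())
print("cs   -> doc:", util.cos_sim(cs_emb, doc_emb).item())
print("mono -> cs :", util.cos_sim(mono_emb, cs_emb).item())
```

Running this over many query pairs and averaging the gap between the two query-document scores gives a rough, model-agnostic way to see the divergence the paper measures at scale.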
