Algospeak, Hiding in the Open: The Trade-off Between Legible Meaning and Detection Avoidance
Jan Fillies, Ronald E. Robertson, Jeffrey Hancock
TLDR
This paper explores Algospeak's trade-off between evading detection and maintaining understandability, introducing a framework and dataset to quantify this dynamic.
Key contributions
- Formalizes Algospeak dynamics, showing that as Algospeak increases, both detectability and understandability decrease.
- Introduces Majority Understandable Modulation (MUM), the modulation level beyond which the majority of recipients lose comprehension.
- Presents a framework and dataset (700 items) for generating and studying Algospeak variants.
- Empirically evaluates 7 LLMs on Algospeak interpretation and disinformation detection.
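The MUM threshold described above can be illustrated with a minimal sketch: given measured understandability at each modulation level, find where the curve first crosses the majority (50%) line. The function name, the levels, and all numbers below are illustrative assumptions, not values or code from the paper (which fits curves to its actual evaluation data).

```python
# Hypothetical sketch of MUM-style threshold estimation:
# the modulation level at which majority (>50%) comprehension is lost.
# All values are illustrative, not taken from the paper's dataset.

def mum_threshold(levels, understandability, majority=0.5):
    """Return the modulation level where understandability first
    drops below `majority`, linearly interpolated between levels."""
    points = list(zip(levels, understandability))
    for (l0, u0), (l1, u1) in zip(points, points[1:]):
        if u0 >= majority > u1:
            # Linear interpolation between the two bracketing levels.
            return l0 + (u0 - majority) * (l1 - l0) / (u0 - u1)
    return None  # comprehension never falls below the majority line

# Illustrative curve: comprehension decays as modulation increases.
levels = [0, 1, 2, 3, 4]
understanding = [0.95, 0.85, 0.62, 0.38, 0.15]
print(round(mum_threshold(levels, understanding), 2))  # → 2.5
```

A piecewise-linear crossing is the simplest estimator; the paper instead fits a parametric curve over modulation levels, which smooths noise across strategies and models before locating the threshold.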
Why it matters
This research is crucial for understanding Algospeak's evolving dynamics, particularly as LLMs become central to content generation and moderation. It offers a foundational framework, dataset, and experimental setup to analyze the trade-off between evasion and legibility, aiding in the development of more effective detection strategies.
Original Abstract
As large language models (LLMs) increasingly mediate both content generation and moderation, linguistic evasion strategies known as Algospeak have intensified the coevolution between evaders and detectors. This research formalizes the underlying dynamics grounded in a joint action model: when Algospeak increases, detectability and understandability decrease. Further, the concept of Majority Understandable Modulation (MUM) is introduced and defined as the modulation level at which additional evasive alteration increases detector evasion but loses comprehension for the majority of recipients. To empirically probe this trade-off, we introduce a reproducible framework that can be used to create meaning-preserving, Algospeak-style variants, based on an existing taxonomy and with tunable modulation levels. Using COVID-19 disinformation as a first proof-by-example setting, we construct a reference dataset of 700 modulated items, drawn from twenty base sentences across five modulation levels and seven strategies. We then run two linked evaluations with seven different language models: one testing for interpretation through meaning recovery and one for disinformation detection through classification. Curve fitting over modulation levels yields an estimate of the Majority Understandable Modulation threshold and enables sensitivity analyses across strategies and models, see Figure 1. Results reveal the characteristic relationships between understandability and modulation. This study lays the groundwork for understanding the dynamics behind Algospeak and provides the framework, dataset, and experimental setups described.