Dialect vs Demographics: Quantifying LLM Bias from Implicit Linguistic Signals vs. Explicit User Profiles
TLDR
LLMs exhibit a paradox: explicit demographic profiles trigger safety filters, while implicit dialects bypass them, leading to less sanitized content.
Key contributions
- Explicit identity prompts activate aggressive safety filters, increasing refusal rates for Black users.
- Implicit dialect cues (e.g., AAVE, Singlish) act as a "dialect jailbreak," reducing refusals to near zero.
- This "dialect jailbreak" results in less sanitized, potentially more hostile content for dialect speakers.
- LLM safety alignment is brittle, over-indexed on explicit keywords, creating bifurcated user experiences.
Why it matters
This paper reveals a critical paradox in LLM safety, where implicit linguistic signals bypass filters that explicit identity triggers. It exposes how current alignment techniques are brittle and create unequal, potentially harmful, information landscapes for dialect speakers, underscoring the need for more robust, generalized safety mechanisms.
Original Abstract
As state-of-the-art Large Language Models (LLMs) have become ubiquitous, ensuring equitable performance across diverse demographics is critical. However, it remains unclear whether these disparities arise from the explicitly stated identity itself or from the way identity is signaled. In real-world interactions, users' identity is often conveyed implicitly through a complex combination of various socio-linguistic factors. This study disentangles these signals by employing a factorial design with over 24,000 responses from two open-weight LLMs (Gemma-3-12B and Qwen-3-VL-8B), comparing prompts with explicitly announced user profiles against implicit dialect signals (e.g., AAVE, Singlish) across various sensitive domains. Our results uncover a unique paradox in LLM safety where users achieve ``better'' performance by sounding like a demographic than by stating they belong to it. Explicit identity prompts activate aggressive safety filters, increasing refusal rates and reducing semantic similarity compared to our reference text for Black users. In contrast, implicit dialect cues trigger a powerful ``dialect jailbreak,'' reducing refusal probability to near zero while simultaneously achieving a greater level of semantic similarity to the reference texts compared to Standard American English prompts. However, this ``dialect jailbreak'' introduces a critical safety trade-off regarding content sanitization. We find that current safety alignment techniques are brittle and over-indexed on explicit keywords, creating a bifurcated user experience where ``standard'' users receive cautious, sanitized information while dialect speakers navigate a less sanitized, more raw, and potentially a more hostile information landscape and highlights a fundamental tension in alignment--between equitable and linguistic diversity--and underscores the need for safety mechanisms that generalize beyond explicit cues.
📬 Weekly AI Paper Digest
Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.