Who Gets Flagged? The Pluralistic Evaluation Gap in AI Content Watermarking
Alexander Nemecek, Osama Zafar, Yuqiao Xu, Wenbiao Li, Erman Ayday
TLDR
AI watermarking exhibits biases across languages, cultures, and demographics, highlighting a critical gap in current evaluation standards.
Key contributions
- AI watermarking performance varies systematically across languages, cultures, and demographic groups.
- With one exception, existing watermarking benchmarks do not report performance across languages, cultural content types, or population groups.
- Proposes three concrete evaluation dimensions: cross-lingual detection parity, culturally diverse content coverage, and demographic disaggregation of detection metrics.
- Highlights that watermarking is held to a lower fairness standard than the AI models it is meant to govern.
Why it matters
This paper reveals critical biases in AI content watermarking across languages, cultures, and demographic groups, arguing that current evaluations are insufficient to surface them. It advocates for rigorous bias auditing of the watermarking layer, mirroring the standards already applied to generative AI models, to ensure fairness and accuracy before deployment.
Original Abstract
Watermarking is becoming the default mechanism for AI content authentication, with governance policies and frameworks referencing it as infrastructure for content provenance. Yet across text, image, and audio modalities, watermark signal strength, detectability, and robustness depend on statistical properties of the content itself, properties that vary systematically across languages, cultural visual traditions, and demographic groups. We examine how this content dependence creates modality-specific pathways to bias. Reviewing the major watermarking benchmarks across modalities, we find that, with one exception, none report performance across languages, cultural content types, or population groups. To address this, we propose three concrete evaluation dimensions for pluralistic watermark benchmarking: cross-lingual detection parity, culturally diverse content coverage, and demographic disaggregation of detection metrics. We connect these to the governance frameworks currently mandating watermarking deployment and show that watermarking is held to a lower fairness standard than the generative systems it is meant to govern. Our position is that evaluation must precede deployment, and that the same bias auditing requirements applied to AI models should extend to the verification layer.
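The paper is a position piece and does not ship evaluation code, but its third proposed dimension, demographic disaggregation of detection metrics, is concrete enough to sketch. Below is a minimal illustration of what disaggregated reporting could look like; the sample schema, the `detect_watermark` callable, and the group labels are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not from the paper): per-group watermark detection
# rates and a cross-group parity gap.
from collections import defaultdict

def disaggregated_detection_rates(samples, detect_watermark):
    """Compute per-group true-positive rates for a watermark detector.

    samples: iterable of (content, group, is_watermarked) tuples, where
             `group` might be a language, cultural content type, or
             demographic label (hypothetical schema).
    detect_watermark: callable mapping content to a boolean decision.
    """
    hits = defaultdict(int)    # watermarked items correctly flagged, per group
    totals = defaultdict(int)  # watermarked items seen, per group
    for content, group, is_watermarked in samples:
        if not is_watermarked:
            continue  # TPR only; FPR would disaggregate unwatermarked items
        totals[group] += 1
        hits[group] += int(detect_watermark(content))
    return {g: hits[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Worst-case spread in detection rate across groups (0 = perfect parity)."""
    return max(rates.values()) - min(rates.values())

# Example usage (names hypothetical):
# rates = disaggregated_detection_rates(test_set, detector.detect)
# print(rates, parity_gap(rates))
```

Under the paper's framing, the full per-group table would be reported rather than a single aggregate rate, making any cross-lingual or demographic detection gap visible before deployment.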