SpeechParaling-Bench: A Comprehensive Benchmark for Paralinguistic-Aware Speech Generation
Ruohan Liu, Shukang Yin, Tao Wang, Dong Zhang, Weiji Zhuang + 4 more
TLDR
SpeechParaling-Bench is a new benchmark for evaluating paralinguistic-aware speech generation in LALMs, using fine-grained features and a novel LALM-based judge.
Key contributions
- Introduces SpeechParaling-Bench, a benchmark with over 100 fine-grained paralinguistic features.
- Includes 1,000+ English-Chinese parallel speech queries across three progressively challenging tasks: fine-grained control, intra-utterance variation, and context-aware adaptation.
- Proposes an LALM-based pairwise comparison pipeline for reliable, scalable, and less subjective evaluation.
- Reveals that current LALMs struggle significantly with static paralinguistic control and dynamic modulation.
Why it matters
This paper introduces SpeechParaling-Bench to address two limitations in evaluating paralinguistic cues in LALMs: coarse feature coverage and subjective assessment. Its LALM-based pairwise evaluation method offers a scalable, less subjective way to assess models. The findings reveal significant deficiencies in current LALMs, underscoring the need for improved paralinguistic modeling to build more human-aligned voice assistants.
Original Abstract
Paralinguistic cues are essential for natural human-computer interaction, yet their evaluation in Large Audio-Language Models (LALMs) remains limited by coarse feature coverage and the inherent subjectivity of assessment. To address these challenges, we introduce SpeechParaling-Bench, a comprehensive benchmark for paralinguistic-aware speech generation. It expands existing coverage from fewer than 50 to over 100 fine-grained features, supported by more than 1,000 English-Chinese parallel speech queries, and is organized into three progressively challenging tasks: fine-grained control, intra-utterance variation, and context-aware adaptation. To enable reliable evaluation, we further develop a pairwise comparison pipeline, in which candidate responses are evaluated against a fixed baseline by an LALM-based judge. By framing evaluation as relative preference rather than absolute scoring, this approach mitigates subjectivity and yields more stable and scalable assessments without costly human annotation. Extensive experiments reveal substantial limitations in current LALMs. Even leading proprietary models struggle with comprehensive static control and dynamic modulation of paralinguistic features, while failure to correctly interpret paralinguistic cues accounts for 43.3% of errors in situational dialogue. These findings underscore the need for more robust paralinguistic modeling toward human-aligned voice assistants.
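The pairwise pipeline described above scores each candidate response by relative preference against a fixed baseline rather than on an absolute scale. A minimal sketch of that scoring loop is below; the `judge` callable stands in for the LALM-based judge, and all names are illustrative, not the paper's actual API:

```python
# Minimal sketch of pairwise-comparison evaluation, assuming a hypothetical
# `judge(candidate, baseline)` that returns "win", "tie", or "lose" from the
# candidate's perspective. These names are illustrative only.

def pairwise_win_rate(candidates, baselines, judge):
    """Compare each candidate against the fixed baseline; ties count half."""
    score = 0.0
    for cand, base in zip(candidates, baselines):
        verdict = judge(cand, base)
        if verdict == "win":
            score += 1.0
        elif verdict == "tie":
            score += 0.5
    return score / len(candidates)

# Toy judge that prefers the longer response, for demonstration only;
# a real pipeline would prompt the LALM judge with both audio responses.
def toy_judge(cand, base):
    if len(cand) > len(base):
        return "win"
    if len(cand) == len(base):
        return "tie"
    return "lose"

rate = pairwise_win_rate(["aaaa", "bb", "ccc"], ["aa", "bb", "cccc"], toy_judge)
print(rate)  # → 0.5 (one win, one tie, one loss over three queries)
```

Framing evaluation this way means the judge only has to answer "which response is better?", a decision that is typically more stable than assigning an absolute quality score.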