From Reactive to Proactive: Assessing the Proactivity of Voice Agents via ProVoice-Bench
TLDR
ProVoice-Bench evaluates proactive voice agents on four novel tasks, revealing key gaps in current multimodal LLMs.
Key contributions
- Introduces ProVoice-Bench, the first benchmark for proactive voice agents.
- Defines four novel tasks targeting proactive intervention and monitoring.
- Curates 1,182 high-quality samples via a multi-stage data synthesis pipeline.
- Identifies reasoning gaps and over-triggering failures in top multimodal LLMs (see the metric sketch after this list).
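The paper does not publish its scoring code here, but a minimal sketch of how over-triggering might be quantified could look like the following, assuming each benchmark sample carries a gold label for whether the agent should intervene. The metric names `false_trigger_rate` and `missed_intervention_rate` are our illustration, not the benchmark's official definitions.

```python
# Sketch: quantify over-triggering vs. missed interventions, assuming gold
# per-sample labels for "should the agent intervene here?". Metric names are
# illustrative, not taken from ProVoice-Bench.
def proactivity_metrics(gold: list[bool], predicted: list[bool]) -> dict[str, float]:
    """Compare gold intervention labels against an agent's trigger decisions."""
    assert len(gold) == len(predicted)
    false_triggers = sum(p and not g for g, p in zip(gold, predicted))
    missed = sum(g and not p for g, p in zip(gold, predicted))
    should_stay_silent = sum(not g for g in gold)
    should_intervene = sum(gold)
    return {
        # Fraction of "stay silent" cases where the agent spoke up anyway.
        "false_trigger_rate": false_triggers / max(should_stay_silent, 1),
        # Fraction of "intervene" cases the agent failed to act on.
        "missed_intervention_rate": missed / max(should_intervene, 1),
    }

# Example: the agent over-triggers on 2 of 3 "stay silent" cases.
print(proactivity_metrics(
    gold=[True, False, False, True, False],
    predicted=[True, True, True, False, False],
))
```

Separating the two rates matters because a model can hide over-triggering behind high raw accuracy: an agent that intervenes constantly never misses an intervention but is unusable in practice.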
Why it matters
This paper addresses the overlooked challenge of proactive voice agent evaluation, guiding future improvements for more natural and context-aware interactions.
Original Abstract
Recent advancements in LLM agents are gradually shifting from reactive, text-based paradigms toward proactive, multimodal interaction. However, existing benchmarks primarily focus on reactive responses, overlooking the complexities of proactive intervention and monitoring. To bridge this gap, we introduce ProVoice-Bench, the first evaluation framework specifically designed for proactive voice agents, featuring four novel tasks. By leveraging a multi-stage data synthesis pipeline, we curate 1,182 high-quality samples for rigorous testing. Our evaluation of state-of-the-art Multimodal LLMs reveals a significant performance gap, particularly regarding over-triggering and reasoning capabilities. These findings highlight the limitations of current models and offer a roadmap for developing more natural, context-aware proactive agents.
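To make the setup concrete, here is a hypothetical shape for one benchmark sample, assuming each of the 1,182 items pairs a spoken context with a gold proactivity decision across the four task types. Every field name below is an illustrative guess, not the paper's released schema.

```python
# Hypothetical sample schema for a proactive-voice benchmark item; all field
# names and values are illustrative assumptions, not from the paper.
from dataclasses import dataclass

@dataclass
class ProactiveSample:
    audio_path: str          # spoken context the agent listens to
    transcript: str          # reference transcript of that context
    should_intervene: bool   # gold label: is a proactive turn warranted?
    reference_response: str  # an acceptable intervention, empty if none expected
    task: str                # one of the four task types, e.g. "monitoring"

sample = ProactiveSample(
    audio_path="clips/kitchen_timer.wav",
    transcript="(timer beeping while the user keeps talking)",
    should_intervene=True,
    reference_response="Your timer just went off. Want me to silence it?",
    task="monitoring",
)
```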