ArXiv TLDR

MoshiRAG: Asynchronous Knowledge Retrieval for Full-Duplex Speech Language Models

2604.12928

Chung-Ming Chien, Manu Orsini, Eugene Kharitonov, Neil Zeghidour, Karen Livescu + 1 more

cs.CL, eess.AS

TLDR

MoshiRAG improves the factuality of full-duplex speech language models through asynchronous knowledge retrieval while maintaining real-time interactivity.

Key contributions

  • Proposes MoshiRAG, a modular system for full-duplex speech LMs, combining a compact interface with selective retrieval.
  • Uses asynchronous retrieval during natural conversation gaps to ground responses in external knowledge without delay.
  • Achieves factuality on par with top non-duplex models while preserving real-time full-duplex interactivity.
  • Supports plug-and-play retrieval methods without retraining and performs strongly on out-of-domain mathematical reasoning tasks.

Why it matters

Full-duplex speech models are crucial for natural conversational AI, but factuality has been a major hurdle. MoshiRAG offers a practical solution by integrating external knowledge asynchronously, making these interactive systems more reliable. This approach enables real-time, factual conversations without compromising user experience.

Original Abstract

Speech-to-speech language models have recently emerged to enhance the naturalness of conversational AI. In particular, full-duplex models are distinguished by their real-time interactivity, including handling of pauses, interruptions, and backchannels. However, improving their factuality remains an open challenge. While scaling the model size could address this gap, it would make real-time inference prohibitively expensive. In this work, we propose MoshiRAG, a modular approach that combines a compact full-duplex interface with selective retrieval to access more powerful knowledge sources. Our asynchronous framework enables the model to identify knowledge-demanding queries and ground its responses in external information. By leveraging the natural temporal gap between response onset and the delivery of core information, the retrieval process can be completed while maintaining a natural conversation flow. With this approach, MoshiRAG achieves factuality comparable to the best publicly released non-duplex speech language models while preserving the interactivity inherent to full-duplex systems. Moreover, our flexible design supports plug-and-play retrieval methods without retraining and demonstrates strong performance on out-of-domain mathematical reasoning tasks.
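The core idea — launching retrieval in the background while the model produces a response onset, so the lookup finishes before the core information is due — can be sketched as below. This is a minimal illustration, not the paper's implementation; `needs_retrieval` and `retrieve` are hypothetical stand-ins for the query classifier and the external knowledge source.

```python
from concurrent.futures import ThreadPoolExecutor
import time


def needs_retrieval(query: str) -> bool:
    """Hypothetical classifier deciding if a query demands external knowledge."""
    return "capital" in query.lower()


def retrieve(query: str) -> str:
    """Simulated external lookup; the sleep stands in for retrieval latency."""
    time.sleep(0.2)
    return "Canberra is the capital of Australia."


def respond(query: str) -> list[str]:
    """Emit a response onset immediately; ground the core answer once retrieval returns."""
    tokens = []
    if needs_retrieval(query):
        with ThreadPoolExecutor(max_workers=1) as pool:
            # Retrieval runs asynchronously while the model keeps talking.
            future = pool.submit(retrieve, query)
            # The response onset fills the natural gap before core information.
            tokens.append("Good question, let me see...")
            evidence = future.result()  # typically ready by the time it is needed
        tokens.append(f"Based on what I found: {evidence}")
    else:
        tokens.append("Sure!")
    return tokens


print(respond("What is the capital of Australia?"))
```

The key property is that the user hears speech immediately: the latency of the external lookup is hidden inside the conversational gap rather than added before the first spoken word.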
