Listening Alone, Understanding Together: Collaborative Context Recovery for Privacy-Aware AI
Tanmay Srivastava, Amartya Basu, Shubham Jain, Vaishnavi Ranganathan
TLDR
CONCORD is a privacy-aware framework enabling always-listening AI to safely recover missing context through collaborative assistant-to-assistant exchanges.
Key contributions
- Introduces CONCORD, an A2A framework for privacy-aware, always-listening AI.
- Enforces owner-only speech capture via real-time speaker verification to preserve privacy.
- Recovers missing context through spatio-temporal resolution and relationship-aware A2A queries.
- Achieves 91.4% gap-detection recall, 96% relationship classification accuracy, and a 97% true negative rate on privacy-sensitive disclosure decisions.
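The relationship-aware disclosure gate described above can be pictured as a policy lookup that releases a missing context category only when the relationship tier between the two owners permits it. This is a minimal illustrative sketch; the tier names, context categories, and function names are assumptions for exposition, not the paper's implementation.

```python
from dataclasses import dataclass

# Hypothetical disclosure policy: which context categories an assistant
# may share with another assistant, keyed by relationship tier.
RELATIONSHIP_POLICY = {
    "family": {"location", "schedule", "topic"},
    "colleague": {"schedule", "topic"},
    "stranger": set(),  # no disclosure to unknown parties
}

@dataclass
class A2AQuery:
    gap: str           # missing context category detected in the one-sided transcript
    relationship: str  # classified relationship between the two owners

def disclosure_decision(query: A2AQuery) -> bool:
    """Relationship-aware gate: disclose only if the requested
    context category is allowed for this relationship tier."""
    allowed = RELATIONSHIP_POLICY.get(query.relationship, set())
    return query.gap in allowed

# A colleague's assistant may ask about a shared schedule...
print(disclosure_decision(A2AQuery("schedule", "colleague")))  # True
# ...while a stranger's assistant is refused by default.
print(disclosure_decision(A2AQuery("location", "stranger")))   # False
```

Defaulting unknown relationships to an empty set mirrors the framework's conservative stance: when in doubt, the gate withholds context rather than risking a privacy-sensitive disclosure.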
Why it matters
Always-listening AI risks capturing speech from non-consenting bystanders, a privacy hurdle that limits its social adoption. CONCORD addresses this by reframing context recovery as a secure, negotiated exchange between assistants, offering a practical path toward deploying proactive conversational agents responsibly.
Original Abstract
We introduce CONCORD, a privacy-aware asynchronous assistant-to-assistant (A2A) framework that leverages collaboration between proactive speech-based AI assistants. As agents evolve from reactive to always-listening assistants, they face a core privacy risk of capturing non-consenting speakers, which makes their social deployment a challenge. To overcome this, we implement CONCORD, which enforces owner-only speech capture via real-time speaker verification, producing a one-sided transcript that incurs missing context but preserves privacy. We demonstrate that CONCORD can safely recover necessary context through (1) spatio-temporal context resolution, (2) information gap detection, and (3) minimal A2A queries governed by relationship-aware disclosure. Instead of relying on hallucination-prone inference, CONCORD treats context recovery as a negotiated, safe exchange between assistants. Across a multi-domain dialogue dataset, CONCORD achieves 91.4% recall in gap detection, 96% relationship classification accuracy, and a 97% true negative rate in privacy-sensitive disclosure decisions. By reframing always-listening AI as a coordination problem between privacy-preserving agents, CONCORD offers a practical path toward socially deployable proactive conversational agents.