The 2026 ACII Dyadic Conversations (DaiKon) Workshop & Challenge
Panagiotis Tzirakis, Alice Baird, Jeffrey Brooks, Emilia Parada-Cabaleiro, Lukas Stappen, et al.
TLDR
The ACII-DaiKon workshop introduces a new benchmark and challenge for modeling interpersonal affect and social dynamics in dyadic conversations.
Key contributions
- Introduces a benchmark for modeling interpersonal affect and social dynamics in dyadic conversations.
- Features three sub-challenges: influence, turn-taking, and rapport trajectory prediction.
- Utilizes the large, multimodal Hume-DaiKon dataset (945 conversations, 5 languages).
- Establishes baseline performance and fosters cross-disciplinary discussion.
Why it matters
This workshop addresses a critical gap in conversational AI by focusing on the complex, time-evolving dynamics of dyadic interactions, moving beyond speaker-centric models. It provides a much-needed benchmark and dataset to advance research in interpersonal affect, influence, and rapport, enabling more realistic models of human communication.
Original Abstract
The 2026 ACII Dyadic Conversations (ACII-DaiKon) Workshop & Challenge introduces a benchmark for modeling interpersonal affect and social dynamics in dyadic conversations. Although conversational affect modeling has advanced rapidly, most benchmarks remain speaker-centric and underrepresent coupled, time-evolving processes between partners, including directional influence, conversational timing coordination, and rapport development. To address this gap, ACII-DaiKon presents three coordinated sub-challenges built on a shared dataset: (1) directional interpersonal influence prediction, (2) turn-taking prediction (next-speaker and time-to-next-speech), and (3) rapport trajectory prediction across full interactions. The challenge is built on the Hume-DaiKon dataset, comprising 945 dyadic conversations (743.4 hours of audiovisual data) collected under naturalistic conditions across five languages. The benchmark supports multimodal modeling, temporal reasoning, and cross-context generalization through fixed train/validation/test splits, standardized metrics, and released baseline systems. Evaluation uses Concordance Correlation Coefficient (CCC), Pearson correlation, Macro-F1, and Mean Absolute Error (MAE) depending on the sub-challenge. Baseline experiments establish initial reference performance, with best test results of 0.40 CCC and 0.50 Pearson for influence prediction, 0.66 Macro-F1 and 1.50 s MAE for turn-taking, and 0.68 CCC and 0.70 Pearson for rapport trajectory modeling. These results indicate that while current methods capture coarse dyadic patterns, robust modeling of directional dependence and long-horizon interpersonal dynamics remains challenging. The workshop provides a shared platform for rigorous comparison and cross-disciplinary discussion on data validity, evaluation protocols, and culturally aware modeling for dyadic interaction.
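The abstract reports results in Concordance Correlation Coefficient (CCC), the standard metric for continuous affect prediction. As a minimal illustrative sketch (not the organizers' official scoring code), CCC can be computed from means, variances, and covariance; unlike Pearson correlation, it additionally penalizes shifts in mean and scale between predictions and targets:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance Correlation Coefficient (Lin, 1989).

    CCC = 2*cov(t, p) / (var(t) + var(p) + (mean(t) - mean(p))^2)
    Equals 1 only for perfect agreement; a constant offset lowers it
    even when Pearson correlation is 1.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2.0 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)
```

For example, predictions that track the target perfectly but with a constant offset of +1 still have Pearson correlation 1.0, while CCC drops below 1.0 because of the mean-shift penalty.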