ArXiv TLDR

Transition-Matrix Regularization for Next Dialogue Act Prediction in Counselling Conversations

arXiv:2604.18539

Eric Rudolph, Philipp Steigerwald, Jens Albrecht

cs.CL cs.AI

TLDR

This paper introduces KL regularization for Next Dialogue Act Prediction, improving performance and dialogue-flow alignment in counselling conversations.

Key contributions

  • Proposes KL regularization to incorporate dialogue-flow statistics into Next Dialogue Act Prediction (NDAP).
  • Aligns predicted act distributions with corpus-derived transition patterns for improved coherence.
  • Achieves 9-42% relative macro-F1 improvement on a 60-class German counselling dataset.
  • Demonstrates transferability across languages and domains, particularly benefiting weaker baseline models.
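The core idea above can be sketched as follows: estimate a transition matrix of dialogue-act bigram frequencies from the corpus, then add a KL term that pulls each predicted next-act distribution toward the transition row of the previous act. This is a minimal illustration, not the paper's implementation; the exact loss form, the KL direction, and all names (`estimate_transitions`, `regularized_loss`, the weight `lam`) are assumptions for demonstration.

```python
import numpy as np

def estimate_transitions(act_sequences, n_acts, smoothing=1.0):
    """Build a row-normalized act-to-act transition matrix from corpora of
    dialogue-act index sequences, with additive smoothing (assumed detail)."""
    counts = np.full((n_acts, n_acts), smoothing)
    for seq in act_sequences:
        for prev_act, next_act in zip(seq, seq[1:]):
            counts[prev_act, next_act] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) between two discrete distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def regularized_loss(probs, target, prev_act, transitions, lam=0.1, eps=1e-9):
    """Cross-entropy on the true next act plus a KL penalty that aligns the
    predicted distribution with the corpus transition row for the previous
    act. `lam` trades off task loss against flow alignment (assumed form)."""
    ce = -np.log(max(probs[target], eps))
    kl = kl_divergence(probs, transitions[prev_act])
    return ce + lam * kl
```

A prediction that matches the empirical transition row incurs a smaller penalty than one that contradicts it, which is the mechanism behind the reported dialogue-flow alignment gains.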

Why it matters

This research offers a simple yet effective method to enhance dialogue act prediction, especially in data-sparse scenarios. By leveraging empirical dialogue-flow priors, it improves model performance and discourse coherence, making AI-driven counselling tools more reliable.

Original Abstract

This paper studies how empirical dialogue-flow statistics can be incorporated into Next Dialogue Act Prediction (NDAP). A KL regularization term is proposed that aligns predicted act distributions with corpus-derived transition patterns. Evaluated on a 60-class German counselling taxonomy using 5-fold cross-validation, this improves macro-F1 by 9--42% relative depending on encoder and substantially improves dialogue-flow alignment. Cross-dataset validation on HOPE suggests that improvements transfer across languages and counselling domains. In systematic ablations across pretrained encoders and architectures, the findings indicate that transition regularization provides consistent gains and disproportionately benefits weaker baseline models. The results suggest that lightweight discourse-flow priors complement pretrained encoders, especially in fine-grained, data-sparse dialogue tasks.
