Domain-Adapted Retrieval for In-Context Annotation of Pedagogical Dialogue Acts
Jinsook Lee, Kirk Vanacore, Zhuqian Zhou, Bakhtawar Ahtisham, Rene F. Kizilcec
TLDR
A domain-adapted RAG pipeline substantially improves pedagogical dialogue act annotation by adapting only the retrieval component (a fine-tuned embedding model plus utterance-level indexing), outperforming no-retrieval baselines while keeping the generative model frozen.
Key contributions
- Introduces a domain-adapted RAG pipeline for annotating pedagogical dialogue acts.
- Achieves strong agreement (Cohen's κ of 0.526–0.743) across two real tutoring datasets and three LLM backbones.
- Adapts retrieval by fine-tuning an embedding model and using utterance-level indexing.
- Shows that utterance-level indexing, more than embedding quality alone, drives the gains: top-1 label match rates rise from 39.7% to 62.0% (TalkMoves) and 52.9% to 73.1% (Eedi), and retrieval corrects systematic label biases seen in zero-shot prompting.
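The retrieval step described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: a toy bag-of-words similarity stands in for the fine-tuned embedding model, and the labels, utterances, and the `retrieve_demonstrations` helper are all invented for the example. The key idea it shows is utterance-level indexing, where each labeled utterance is its own index entry used to fetch few-shot demonstrations.

```python
from collections import Counter
from math import sqrt

def embed(utterance: str) -> Counter:
    # Toy bag-of-words "embedding"; the paper instead fine-tunes a
    # lightweight embedding model on tutoring corpora.
    return Counter(utterance.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Utterance-level index: each labeled utterance is a separate entry,
# rather than indexing whole dialogues as single documents.
index = [
    ("can you explain why you chose that step", "press_for_reasoning"),
    ("great job solving that equation", "praise"),
    ("what do you think the next step is", "eliciting"),
]

def retrieve_demonstrations(query: str, k: int = 2):
    """Return the k most similar labeled utterances as few-shot demos."""
    q = embed(query)
    ranked = sorted(index, key=lambda e: cosine(q, embed(e[0])), reverse=True)
    return ranked[:k]

demos = retrieve_demonstrations("why did you choose that step")
```

The retrieved `(utterance, label)` pairs would then be formatted as in-context examples in the annotation prompt, which is how the pipeline grounds a frozen LLM in domain-specific labeling conventions.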
Why it matters
LLMs often fail at pedagogical dialogue annotation without domain grounding. This paper provides a practical and effective path to expert-level annotation by adapting the retrieval component, keeping the generative model frozen. This is crucial for high-stakes educational applications.
Original Abstract
Automated annotation of pedagogical dialogue is a high-stakes task where LLMs often fail without sufficient domain grounding. We present a domain-adapted RAG pipeline for tutoring move annotation. Rather than fine-tuning the generative model, we adapt retrieval by fine-tuning a lightweight embedding model on tutoring corpora and indexing dialogues at the utterance level to retrieve labeled few-shot demonstrations. Evaluated across two real tutoring dialogue datasets (TalkMoves and Eedi) and three LLM backbones (GPT-5.2, Claude Sonnet 4.6, Qwen3-32b), our best configuration achieves Cohen's κ of 0.526–0.580 on TalkMoves and 0.659–0.743 on Eedi, substantially outperforming no-retrieval baselines (κ = 0.275–0.413 and 0.160–0.410). An ablation study reveals that utterance-level indexing, rather than embedding quality alone, is the primary driver of these gains, with top-1 label match rates improving from 39.7% to 62.0% on TalkMoves and 52.9% to 73.1% on Eedi under domain-adapted retrieval. Retrieval also corrects systematic label biases present in zero-shot prompting and yields the largest improvements for rare and context-dependent labels. These findings suggest that adapting the retrieval component alone is a practical and effective path toward expert-level pedagogical dialogue annotation while keeping the generative model frozen.
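For reference, the Cohen's κ figures quoted in the abstract measure chance-corrected agreement between model labels and expert annotations. A minimal, generic computation (the standard definition, not the paper's evaluation code; the example label sequences are made up):

```python
def cohens_kappa(a, b):
    """Chance-corrected agreement between two equal-length label sequences."""
    assert len(a) == len(b) and a, "sequences must be non-empty and aligned"
    n = len(a)
    labels = set(a) | set(b)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if both annotators labeled independently
    # according to their own marginal label distributions.
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e) if p_e != 1 else 1.0

kappa = cohens_kappa(
    ["praise", "eliciting", "praise", "eliciting"],      # expert labels
    ["praise", "eliciting", "eliciting", "eliciting"],   # model labels
)  # 0.5: agreement well above chance, but imperfect
```

On this scale, the reported 0.526–0.743 corresponds to moderate-to-substantial agreement under common interpretive conventions, versus the slight-to-moderate range of the no-retrieval baselines.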