ArXiv TLDR

Ceci n'est pas une explication: Evaluating Explanation Failures as Explainability Pitfalls in Language Learning Systems

arXiv: 2604.26145

Ben Knight, Wm. Matthew Kennedy, James Edgell

cs.HC, cs.AI

TLDR

This paper identifies and analyzes "explainability pitfalls" in AI language learning feedback, introducing L2-Bench, a benchmark that evaluates AI feedback in language education across six dimensions.

Key contributions

  • Identifies "explainability pitfalls": AI-generated explanations that appear helpful on the surface but are fundamentally flawed.
  • Introduces L2-Bench, a benchmark evaluating AI feedback across six critical dimensions.
  • Analyzes how AI systems fail along these dimensions: diagnostic accuracy, awareness of appropriacy, causes of error, prioritisation, guidance for improvement, and supporting self-regulation.
  • Highlights amplified risks in language learning and outlines open questions for evaluation design.

Why it matters

Flawed AI feedback in language learning can reinforce misconceptions and erode learning outcomes over extended use, often in ways learners and teachers cannot detect. By identifying "explainability pitfalls" and introducing L2-Bench, the paper gives developers a framework for designing safer, more trustworthy, and more effective AI explanations, improving human-AI interaction and reducing the risk of educational harm.

Original Abstract

AI-powered language learning tools increasingly provide instant, personalised feedback to millions of learners worldwide. However, this feedback can fail in ways that are difficult for learners--and even teachers--to detect, potentially reinforcing misconceptions and eroding learning outcomes over extended use. We present a portion of L2-Bench, a benchmark for evaluating AI systems in language education that includes (but is not limited to) six critical dimensions of effective feedback: diagnostic accuracy, awareness of appropriacy, causes of error, prioritisation, guidance for improvement, and supporting self-regulation. We analyse how AI systems can fail with respect to these dimensions. These failures, which we argue are conducive to "explainability pitfalls," are AI-generated explanations that appear helpful on the surface but are fundamentally flawed, increasing the risk of attainment, human-AI interaction, and socioaffective harms. We discuss how the specific context of language learning amplifies these risks and outline open questions we believe merit more attention when designing evaluation frameworks specifically. Our analysis aims to expand the community's understanding of both the typology of explainability pitfalls and the contextual dynamics in which they may occur in order to encourage AI developers to better design safe, trustworthy, and effective AI explanations.
