ArXiv TLDR

Locating acts of mechanistic reasoning in student team conversations with mechanistic machine learning

arXiv: 2604.21870

Kaitlin Gili, Mainak Nistala, Kristen Wendell, Michael C. Hughes

physics.ed-ph · cs.LG

TLDR

An interpretable ML model identifies students' mechanistic reasoning in team conversations, with domain-aligned inductive biases improving generalization to new students and contexts.

Key contributions

  • Develops an interpretable ML model to detect student mechanistic reasoning in team conversations.
  • Outputs time-varying probabilities of individual students' mechanistic reasoning acts.
  • Introduces an inductive bias to align model dynamics with desired domain-specific behavior.
  • Demonstrates improved generalization to new students and contexts due to the inductive bias.

Why it matters

This paper provides STEM education researchers with a crucial tool to efficiently identify and analyze student mechanistic reasoning in team conversations. It addresses the challenge of sifting through vast transcripts, enabling deeper analysis. The work also advocates for developing interpretable and controllable ML models in education research.

Original Abstract

STEM education researchers are often interested in identifying moments of students' mechanistic reasoning for deeper analysis, but have limited capacity to search through many team conversation transcripts to find segments with a high concentration of such reasoning. We offer a solution in the form of an interpretable machine learning model that outputs time-varying probabilities that individual students are engaging in acts of mechanistic reasoning, leveraging evidence from their own utterances as well as contributions from the rest of the group. Using the toolkit of intentionally-designed probabilistic models, we introduce a specific inductive bias that steers the probabilistic dynamics toward desired, domain-aligned behavior. Experiments compare trained models with and without the inductive bias components, investigating whether their presence improves the desired model behavior on transcripts involving never-before-seen students and a novel discussion context. Our results show that the inductive bias improves generalization -- supporting the claim that interpretability is built into the model for this task rather than imposed post hoc. We conclude with practical recommendations for STEM education researchers seeking to adopt the tool and for ML researchers aiming to extend the model's design. Overall, we hope this work encourages the development of mechanistically interpretable models that are understandable and controllable for both end users and model designers in STEM education research.
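The abstract's core idea, a time-varying probability of mechanistic reasoning whose dynamics are steered by an inductive bias, can be illustrated with a toy sketch. This is not the paper's actual model; the evidence scores, the log-odds recursion, and the `persistence` parameter are all hypothetical stand-ins for the probabilistic machinery the authors design. The sketch only shows the general shape: each utterance contributes evidence, and a persistence term keeps the latent state smooth over time rather than jumping utterance by utterance.

```python
import math

def sigmoid(z):
    """Map log-odds to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def reasoning_probabilities(evidence, persistence=0.7, p0=0.1):
    """Toy illustration (not the paper's model) of time-varying
    probabilities that a student is engaged in mechanistic reasoning.

    evidence    : per-utterance scores (hypothetical; positive means the
                  utterance looks mechanistic, negative means it does not)
    persistence : hypothetical inductive-bias weight in (0, 1) that makes
                  the latent state sticky, smoothing the dynamics
    p0          : prior probability before any utterance is observed
    """
    logit = math.log(p0 / (1.0 - p0))  # start from the prior's log-odds
    probs = []
    for e in evidence:
        # Blend the previous state's log-odds with the new evidence:
        # higher persistence = smoother, more domain-aligned trajectories.
        logit = persistence * logit + e
        probs.append(sigmoid(logit))
    return probs

# Five utterances: two mechanistic-looking, one neutral, two off-topic.
probs = reasoning_probabilities([2.0, 1.5, 0.0, -2.0, -2.0])
```

With `persistence` near 0 the probability tracks each utterance in isolation; near 1 it changes slowly, which is one simple way an inductive bias can favor smooth, interpretable trajectories over noisy per-utterance judgments.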

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.