ArXiv TLDR

Uncertainty-aware Generative Learning Path Recommendation with Cognition-Adaptive Diffusion

arXiv:2604.14613

Xiangrui Xiong, Hang Liang, Baiyang Chen, Zifei Pan, Yanli Lee

cs.IR, cs.AI

TLDR

U-GLAD introduces an uncertainty-aware generative diffusion model for personalized learning path recommendation that adapts to diverse learning goals and uncertain learner cognitive states.

Key contributions

  • Models learner cognitive states as probability distributions using a Gaussian LSTM, capturing the learner's underlying true state despite noisy interactions (e.g., lucky guesses or accidental slips).
  • Utilizes a goal-oriented concept encoder with multi-head attention for highly personalized, goal-aligned recommendations.
  • Employs a generative diffusion model to predict optimal next concepts, moving beyond traditional discriminative ranking.
  • Significantly outperforms baselines on three public datasets, offering stable and goal-driven learning paths.
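The Gaussian cognitive-state idea in the first bullet can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the class name `GaussianLSTMCell`, the layer sizes, and the reparameterization step are all assumptions; the paper only specifies that a Gaussian LSTM produces a distribution over the cognitive state.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussianLSTMCell:
    """Toy LSTM cell whose hidden state parameterizes a Gaussian over the
    learner's latent cognitive state, via mean and log-variance heads.
    Dimensions and random initialization are illustrative only."""
    def __init__(self, input_dim, hidden_dim, state_dim):
        d = input_dim + hidden_dim
        # one stacked weight matrix for the four LSTM gates (i, f, g, o)
        self.W = rng.normal(0, 0.1, (4 * hidden_dim, d))
        self.b = np.zeros(4 * hidden_dim)
        # heads mapping the hidden state to distribution parameters
        self.W_mu = rng.normal(0, 0.1, (state_dim, hidden_dim))
        self.W_logvar = rng.normal(0, 0.1, (state_dim, hidden_dim))
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_dim
        i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        mu, logvar = self.W_mu @ h, self.W_logvar @ h
        return h, c, mu, logvar

def sample_state(mu, logvar):
    """Reparameterization trick: draw a sample of the 'true' state."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

# One interaction sequence: each step is a (concept, correctness) feature vector.
cell = GaussianLSTMCell(input_dim=6, hidden_dim=8, state_dim=4)
h, c = np.zeros(8), np.zeros(8)
for _ in range(5):
    x = rng.normal(size=6)          # stand-in for an encoded interaction
    h, c, mu, logvar = cell.step(x, h, c)

state = sample_state(mu, logvar)
print(mu.shape, state.shape)  # (4,) (4,)
```

The point of the distributional state is that a lucky guess widens the variance rather than directly shifting the mean, so downstream recommendations can discount unreliable evidence.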

Why it matters

This paper addresses critical limitations in personalized education by accounting for learner uncertainty and diverse goals. U-GLAD's generative diffusion approach and cognition-adaptive modeling yield learning paths that are more robust and more closely tailored to individual objectives than traditional discriminative ranking, which could meaningfully improve adaptive educational technology.

Original Abstract

Learning Path Recommendation (LPR) is critical for personalized education, yet current methods often fail to account for historical interaction uncertainty (e.g., lucky guesses or accidental slips) and lack adaptability to diverse learning goals. We propose U-GLAD (Uncertainty-aware Generative Learning Path Recommendation with Cognition-Adaptive Diffusion). To address representation bias, the framework models cognitive states as probability distributions, capturing the learner's underlying true state via a Gaussian LSTM. To ensure highly personalized recommendation, a goal-oriented concept encoder utilizes multi-head attention and objective-specific transformations to dynamically align concept semantics with individual learning goals, generating uniquely tailored embeddings. Unlike traditional discriminative ranking approaches, our model employs a generative diffusion model to predict the latent representation of the next optimal concept. Extensive evaluations on three public datasets demonstrate that U-GLAD significantly outperforms representative baselines. Further analyses confirm its superior capability in perceiving interaction uncertainty and providing stable, goal-driven recommendation paths.
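The generative step described in the abstract, predicting the latent representation of the next concept rather than ranking candidates, can be sketched with a generic DDPM-style reverse process. This is a minimal sketch under assumptions: the noise schedule, the stand-in `denoiser`, and the conditioning vector `cond` (cognitive state plus goal embedding) are all placeholders, since the paper's exact architecture is not given in this summary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear noise schedule (standard DDPM convention; illustrative values).
T = 50
betas = np.linspace(1e-4, 0.05, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(z_t, t, cond):
    """Stand-in for the learned noise-prediction network, which in
    U-GLAD would condition on the learner's cognitive state and goal
    embedding (`cond`). Here: a fixed map, for shapes only."""
    W = np.outer(np.sin(np.arange(z_t.size)), np.cos(np.arange(cond.size)))
    return np.tanh(W @ cond + 0.1 * z_t)

def sample_concept_embedding(dim, cond):
    """Reverse diffusion: start from Gaussian noise and iteratively
    denoise toward a latent embedding of the next optimal concept."""
    z = rng.normal(size=dim)
    for t in reversed(range(T)):
        eps_hat = denoiser(z, t, cond)
        # DDPM posterior-mean update, plus noise on all but the last step
        z = (z - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            z += np.sqrt(betas[t]) * rng.normal(size=dim)
    return z

cond = rng.normal(size=8)           # toy cognitive-state + goal conditioning
z0 = sample_concept_embedding(4, cond)
print(z0.shape)  # (4,)
```

In a full system, the generated latent `z0` would be matched against the concept-embedding table (e.g., by nearest neighbor) to pick the concrete next concept; that retrieval step is outside this sketch.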
