ArXiv TLDR

Beyond Long Tail POIs: Transition-Centered Generalization for Human Mobility Prediction

arXiv:2605.05771

Dingyang Lyu, Zhengjia Xu, Jey Han Lau, Jianzhong Qi

cs.IR

TLDR

RECAP improves human mobility prediction by addressing transition-level sparsity, reconstructing rare POI transitions for better generalization.

Key contributions

  • Identifies transition-level sparsity, not just long-tail POIs, as a core bottleneck in mobility prediction.
  • Proposes RECAP, a framework reconstructing rare transitions using multi-hop transitivity and revisit evidence.
  • Employs warm-transition holdout training to prevent memorization and enhance generalization to unseen transitions.
  • Achieves significant accuracy gains, particularly for previously hard-to-predict tail transitions.
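To make the two signals concrete, here is a minimal sketch of how a rare source-destination transition could be scored by composing observed hops (multi-hop transitivity) and by counting a user's past visits (revisit evidence). All function names, the two-hop restriction, and the blending weight `alpha` are illustrative assumptions, not the paper's actual RECAP architecture.

```python
from collections import defaultdict

def build_transition_counts(trajectories):
    """Count observed source -> destination POI transitions
    across all training trajectories (the global transition graph)."""
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for src, dst in zip(traj, traj[1:]):
            counts[src][dst] += 1
    return counts

def two_hop_score(counts, src, dst):
    """Score an unseen (src, dst) pair by composing observed hops
    src -> mid -> dst: a simple instance of multi-hop transitivity."""
    src_total = sum(counts[src].values()) or 1
    score = 0.0
    for mid, c1 in counts[src].items():
        mid_total = sum(counts[mid].values()) or 1
        c2 = counts[mid].get(dst, 0)
        score += (c1 / src_total) * (c2 / mid_total)
    return score

def revisit_score(user_history, dst):
    """Revisit evidence: fraction of the user's past visits to dst."""
    if not user_history:
        return 0.0
    return user_history.count(dst) / len(user_history)

def rare_transition_score(counts, user_history, src, dst, alpha=0.5):
    """Blend the two generalizable signals; alpha is a made-up weight."""
    return (alpha * two_hop_score(counts, src, dst)
            + (1 - alpha) * revisit_score(user_history, dst))
```

For example, if the training data contains "cafe -> park" and "park -> gym" but never "cafe -> gym", the direct count is zero while the two-hop score is positive, which is the kind of compositional generalization the paper targets.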

Why it matters

Current mobility prediction models struggle with rare source-destination transitions, even when both POIs involved are popular. This paper introduces RECAP, a framework that generalizes to these challenging cases by reconstructing unseen transitions from transferable signals. Its approach offers a path to more robust and accurate next-POI forecasting, which matters for recommendation, urban planning, and other real-world applications.

Original Abstract

Human mobility prediction forecasts a user's next Point of Interest (POI) from historical trajectories, supporting applications from recommendation to urban planning. Recent studies have recognized the problem with long-tail POIs in human mobility prediction, which are POIs with few visit records, making new visits to such POIs difficult to predict. Our analysis shows that many predictions fail even for visits to popular POIs. The underlying cause is often transition-level sparsity: the corresponding source-destination transition appears rarely, or never appears, in the training set. We therefore argue that a core bottleneck in human mobility prediction lies in transition-level long-tail generalization. We formulate this problem as compositional generalization and propose a tRansition rEconstruction framework for Compositional generAlization in next-POI prediction (RECAP). RECAP reconstructs long-tail transitions from two generalizable signals: multi-hop transitivity in the global transition graph and revisit evidence from a user's historical trajectory. It further uses warm-transition holdout training to discourage memorization of frequent transitions and encourage generalization from transferable signals. Experiments on multiple real-world datasets show that RECAP consistently improves prediction accuracy, with clear gains on tail transitions.
