Self-Discovered Intention-aware Transformer for Multi-modal Vehicle Trajectory Prediction
Diyi Liu, Zihan Niu, Tu Xu, Lishan Sun
TLDR
This paper introduces a pure Transformer for multi-modal vehicle trajectory prediction, using a two-track design that predicts trajectories and self-discovered intention likelihoods separately.
Key contributions
- Proposes a pure Transformer network for multi-modal vehicle trajectory prediction.
- Employs a two-track design: one for trajectory prediction, one for self-discovered intention likelihood.
- Shows that separating the spatial module from the trajectory-generation module significantly boosts prediction performance.
- Learns ordered trajectory groups by predicting residual offsets among K trajectories.
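The decoding step implied by the last two contributions can be sketched as follows. This is a minimal illustration, not the paper's code: `base_traj`, `offsets`, and `intention_logits` are hypothetical stand-ins for the outputs of the trajectory track and the intention track, assuming the K modes share a base trajectory plus per-mode residual offsets.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_modes(base_traj, offsets, intention_logits):
    """Combine a shared base trajectory with K residual offsets into
    K trajectory modes, scored by the intention track (a sketch of the
    idea, not the paper's implementation).

    base_traj:        (T, 2) predicted future positions
    offsets:          (K, T, 2) residual offsets, one per mode
    intention_logits: (K,) raw scores from the intention track
    """
    trajectories = base_traj[None, :, :] + offsets  # (K, T, 2)
    likelihoods = softmax(intention_logits)         # (K,) sums to 1
    # Order the modes by predicted likelihood, most probable first.
    order = np.argsort(-likelihoods)
    return trajectories[order], likelihoods[order]
```

For example, with K = 3 modes over a 5-step horizon, the mode whose intention logit is largest comes back first, and the likelihoods form a proper distribution over modes.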
Why it matters
This paper addresses limitations of existing vehicle trajectory prediction methods by offering a flexible, pure Transformer-based solution. Its novel two-track architecture and self-discovered intention awareness improve prediction accuracy and adaptability for autonomous driving.
Original Abstract
Predicting vehicle trajectories plays an important role in autonomous driving and ITS applications. Although multiple deep learning algorithms have been devised to predict vehicle trajectories, their reliance on specific graph structures (e.g., Graph Neural Networks) or explicit intention labeling limits their flexibility. In this study, we propose a pure Transformer-based network with multiple modes that considers neighboring vehicles. Two separate tracks are employed: one track focuses on predicting the trajectories, while the other focuses on predicting the likelihood of each intention given neighboring vehicles. The study finds that the two-track design can increase performance by separating the spatial module from the trajectory-generating module. We also find that the model can learn an ordered group of trajectories by predicting residual offsets among K trajectories.