Action-Aware Generative Sequence Modeling for Short Video Recommendation
Wenhao Li, Zihan Lin, Zhengxiao Guo, Jie Zhou, Shukai Liu, and 5 others
TLDR
A2Gen improves short video recommendations by modeling user actions as temporal sequences, leading to significant engagement boosts.
Key contributions
- Introduces A2Gen, an Action-Aware Generative Sequence Network for nuanced short video recommendation.
- Utilizes a Context-aware Attention Module (CAM) to model action sequences enriched with item-specific contextual features.
- Develops a Hierarchical Sequence Encoder (HSE) to learn temporal action patterns from users' historical actions (a minimal sketch of both modules follows this list).
- Designs an Action-seq Autoregressive Generator (AAG) that reuses CAM to generate action sequences.
- Achieves significant online improvements in watch time, interaction rate, and user retention on Kuaishou.
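To make the two encoder components concrete, here is a minimal PyTorch sketch of how an action sequence enriched with item context could pass through a CAM-style attention block and then a two-level HSE-style history encoder. All class names, dimensions, and the mean-pool/GRU aggregation are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of CAM- and HSE-style encoders; names and shapes are assumed.
import torch
import torch.nn as nn


class ContextAwareAttention(nn.Module):
    """CAM-style block: self-attention over a user's action sequence, with each
    action embedding enriched by item-specific context features (assumed design)."""

    def __init__(self, num_actions: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.action_emb = nn.Embedding(num_actions, d_model)
        self.ctx_proj = nn.Linear(d_model, d_model)  # project item-side context features
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, action_ids, item_ctx):
        # action_ids: (B, T) discrete action tokens (e.g. play, like, skip)
        # item_ctx:   (B, T, d_model) contextual features of the watched item per step
        x = self.action_emb(action_ids) + self.ctx_proj(item_ctx)
        out, _ = self.attn(x, x, x)  # attend across the temporal action sequence
        return out                   # (B, T, d_model)


class HierarchicalSequenceEncoder(nn.Module):
    """HSE-style block: encode each historical item's action sequence with CAM,
    then summarize the per-item vectors with a second, higher-level encoder."""

    def __init__(self, num_actions: int, d_model: int = 64):
        super().__init__()
        self.cam = ContextAwareAttention(num_actions, d_model)
        self.history_gru = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, hist_action_ids, hist_item_ctx):
        # hist_action_ids: (B, H, T) action sequences for H historical items
        # hist_item_ctx:   (B, H, T, d_model) matching item context features
        B, H, T = hist_action_ids.shape
        per_step = self.cam(hist_action_ids.view(B * H, T),
                            hist_item_ctx.view(B * H, T, -1))  # (B*H, T, d_model)
        item_vecs = per_step.mean(dim=1).view(B, H, -1)        # pool each item's actions
        _, user_state = self.history_gru(item_vecs)            # encode across the history
        return user_state.squeeze(0)                           # (B, d_model) user representation


# Tiny smoke test with made-up sizes.
if __name__ == "__main__":
    hse = HierarchicalSequenceEncoder(num_actions=10, d_model=64)
    actions = torch.randint(0, 10, (2, 5, 8))   # 2 users, 5 historical videos, 8 actions each
    ctx = torch.randn(2, 5, 8, 64)
    print(hse(actions, ctx).shape)              # torch.Size([2, 64])
```

The mean pooling over each item's action vectors and the GRU across history are stand-ins: the summary only states that HSE learns temporal action patterns from historical actions, not how the two levels are combined.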
Why it matters
Traditional binary-classification models treat a short video as a single holistic entity, so they struggle to capture preferences that vary across a video's diverse segments and over time. A2Gen addresses this by refining user actions along the temporal dimension and modeling them as sequences, yielding more accurate and engaging recommendations. Its deployment on Kuaishou, serving over 400 million users daily, demonstrates substantial real-world impact on engagement and retention.
Original Abstract
With the rapid development of the Internet, users have increasingly high expectations for the recommendation accuracy of online content consumption platforms. However, short videos often contain diverse segments, and users may not hold the same attitude toward all of them. Traditional binary-classification recommendation models, which treat a video as a single holistic entity, face limitations in accurately capturing such nuanced preferences. Considering that user consumption is a temporal process, this paper demonstrates, through statistical analysis and examination of action patterns, that the timing of user actions can represent diverse intentions. Based on this insight, we propose a novel modeling paradigm: the Action-Aware Generative Sequence Network (A2Gen), which refines user actions along the temporal dimension and chains them into sequences for unified processing and prediction. First, we introduce the Context-aware Attention Module (CAM) to model action sequences enriched with item-specific contextual features. Building upon this, we develop the Hierarchical Sequence Encoder (HSE) to learn temporal action patterns from users' historical actions. Finally, by leveraging CAM, we design a module for action sequence generation: the Action-seq Autoregressive Generator (AAG). Extensive offline experiments on Kuaishou's dataset and the public Tmall dataset demonstrate the superiority of our proposed model. Furthermore, through large-scale online A/B testing deployed on Kuaishou's platform, our model achieves significant improvements over baseline methods in multi-task prediction by leveraging sequential information. Specifically, it yields increases of 0.34% in user watch time, 8.1% in interaction rate, and 0.162% in overall user retention (LifeTime-7), leading to successful deployment across all traffic, serving over 400 million users every day.
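The abstract's generative component, AAG, produces an action sequence rather than a single score. The sketch below shows one assumed way such an autoregressive decoder could work: conditioned on the user state from the hierarchical encoder, it emits one action token per step and feeds it back in. The GRU-cell decoder, greedy decoding, and the BOS-style start token are illustrative choices, not details from the paper.

```python
# AAG-in-spirit sketch (illustrative only): autoregressively decode an action sequence.
import torch
import torch.nn as nn


class ActionSeqGenerator(nn.Module):
    def __init__(self, num_actions: int, d_model: int = 64):
        super().__init__()
        self.action_emb = nn.Embedding(num_actions, d_model)
        self.decoder = nn.GRUCell(d_model, d_model)
        self.head = nn.Linear(d_model, num_actions)

    @torch.no_grad()
    def generate(self, user_state, start_action: int, steps: int = 8):
        # user_state:   (B, d_model), e.g. the output of the hierarchical encoder
        # start_action: id of an assumed BOS-style token that opens every sequence
        B = user_state.size(0)
        hidden = user_state
        token = torch.full((B,), start_action, dtype=torch.long)
        generated = []
        for _ in range(steps):
            hidden = self.decoder(self.action_emb(token), hidden)  # condition on previous action
            logits = self.head(hidden)
            token = logits.argmax(dim=-1)                          # greedy next-action choice
            generated.append(token)
        return torch.stack(generated, dim=1)                       # (B, steps) predicted actions
```

How the generated sequence is turned into the multi-task predictions measured in the A/B test is not spelled out in the abstract, so that interface is omitted from the sketch.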