Understanding DNNs in Feature Interaction Models: A Dimensional Collapse Perspective
Jiancheng Wang, Mingjia Yin, Hao Wang, Enhong Chen
TLDR
This paper shows that DNNs in feature interaction models mitigate the dimensional collapse of embeddings, improving representation robustness and clarifying their long-debated role.
Key contributions
- Presents a novel perspective: DNNs improve dimensional robustness in feature interaction models.
- Demonstrates parallel and stacked DNNs effectively mitigate embedding dimensional collapse.
- Provides gradient-based theoretical analysis and empirical evidence for collapse mechanisms.
Why it matters
This paper clarifies the debated role of DNNs in feature interaction models by introducing a dimensional collapse perspective. It shows how DNNs enhance representation robustness, offering insights into their effectiveness and guiding future model design.
Original Abstract
DNNs have gained widespread adoption in feature interaction recommendation models. However, there has been a longstanding debate on their roles. On one hand, some works claim that DNNs possess the ability to implicitly capture high-order feature interactions. Conversely, recent studies have highlighted the limitations of DNNs in effectively learning dot products, specifically second-order interactions, let alone higher-order interactions. In this paper, we present a novel perspective to understand the effectiveness of DNNs: their impact on the dimensional robustness of the representations. In particular, we conduct extensive experiments involving both parallel DNNs and stacked DNNs. Our evaluation encompasses an overall study of complete DNN on two feature interaction models, alongside a fine-grained ablation analysis of components within DNNs. Experimental results demonstrate that both parallel and stacked DNNs can effectively mitigate the dimensional collapse of embeddings. Furthermore, a gradient-based theoretical analysis, supported by empirical evidence, uncovers the underlying mechanisms of dimensional collapse.
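The abstract's central concept, dimensional collapse, refers to embedding vectors occupying only a low-dimensional subspace of the available embedding space. The paper itself does not publish code here, but a common diagnostic (used broadly in the representation-learning literature, not specific to this work) is to inspect the singular value spectrum of the embedding matrix; the sketch below illustrates this with an entropy-based effective rank. All names (`effective_rank`, the synthetic data) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def singular_value_spectrum(embeddings: np.ndarray) -> np.ndarray:
    """Singular values of the mean-centered embedding matrix, sorted descending."""
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    return np.linalg.svd(centered, compute_uv=False)

def effective_rank(embeddings: np.ndarray, eps: float = 1e-12) -> float:
    """Entropy-based effective rank: exponential of the Shannon entropy of the
    normalized singular value distribution. A collapsed embedding concentrates
    variance in a few directions, yielding a value far below the ambient dim."""
    s = singular_value_spectrum(embeddings)
    p = s / (s.sum() + eps)
    entropy = -np.sum(p * np.log(p + eps))
    return float(np.exp(entropy))

# Synthetic comparison: a full-rank embedding table vs. a collapsed one.
rng = np.random.default_rng(0)
d = 16
full = rng.normal(size=(1000, d))                                  # spans all d directions
collapsed = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, d))   # lies in a 2-D subspace

print(effective_rank(full))       # close to d
print(effective_rank(collapsed))  # close to 2
```

Under this lens, the paper's claim is that adding a DNN (parallel or stacked) pushes the spectrum of the learned embeddings toward the "full" case, i.e. flattens the singular value decay rather than letting a few directions dominate.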