Towards Position-Robust Talent Recommendation via Large Language Models
TLDR
L3TR is a new LLM-based framework for listwise talent recommendation that mitigates position bias and reduces token consumption, yielding more accurate and efficient recommendations.
Key contributions
- Proposes an implicit strategy to leverage the LLM's potential output for recommendation tasks.
- Develops block attention and local positional encoding to enhance inter-document processing and mitigate position/token bias.
- Introduces an ID sampling method to resolve candidate set size inconsistency between training and inference.
- Designs evaluation methods for position/token bias and proposes training-free debiasing techniques.
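The block attention and local positional encoding ideas above can be sketched roughly as follows. This is an illustrative sketch under assumptions, not the paper's implementation: each candidate document attends only to the shared prompt and to its own tokens (never to other candidates), and positional indices restart at the same offset for every candidate block so all documents "see" identical positions. Function names, the mask layout, and the position scheme are all assumptions for illustration.

```python
import numpy as np

def block_attention_mask(prompt_len, doc_lens):
    """Boolean attention mask: each candidate block attends to the
    shared prompt and causally to itself, but not to other blocks.
    (Illustrative sketch; layout and names are assumptions.)"""
    total = prompt_len + sum(doc_lens)
    mask = np.zeros((total, total), dtype=bool)
    # Prompt tokens attend causally among themselves.
    for i in range(prompt_len):
        mask[i, : i + 1] = True
    start = prompt_len
    for length in doc_lens:
        for i in range(start, start + length):
            mask[i, :prompt_len] = True    # attend to the shared prompt
            mask[i, start : i + 1] = True  # causal within own block only
        start += length
    return mask

def local_positions(prompt_len, doc_lens):
    """Positional indices that restart at the same offset for every
    candidate block, so position encodings are identical across candidates."""
    pos = list(range(prompt_len))
    for length in doc_lens:
        pos.extend(range(prompt_len, prompt_len + length))
    return np.array(pos)
```

With a 2-token prompt and two 3-token candidates, `local_positions(2, [3, 3])` yields `[0, 1, 2, 3, 4, 2, 3, 4]`: both candidates receive the same positions, which is one way such a scheme could remove the positional signal that drives position bias.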
Why it matters
Talent recruitment is costly, and existing LLM-based systems for it suffer from position bias and high token consumption. This paper introduces L3TR, a novel framework that addresses both issues through listwise processing, enabling more efficient and accurate talent matching and improving the practical utility of LLMs in recruitment.
Original Abstract
Talent recruitment is a critical yet costly process for many industries, with high recruitment costs and long hiring cycles. Existing talent recommendation systems increasingly adopt large language models (LLMs) due to their remarkable language understanding capabilities. However, most prior approaches follow a pointwise paradigm, which requires LLMs to repeatedly process the same text and fails to capture the relationships among candidates in the list, resulting in higher token consumption and suboptimal recommendations. Moreover, LLMs exhibit position bias and the lost-in-the-middle issue when answering multiple-choice questions and processing multiple long documents. To address these issues, we introduce an implicit strategy to utilize the LLM's potential output for the recommendation task and propose L3TR, a novel framework for listwise talent recommendation with LLMs. In this framework, we propose a block attention mechanism and a local positional encoding method to enhance inter-document processing and mitigate the position bias and concurrent token bias issues. We also introduce an ID sampling method to resolve the inconsistency between candidate set sizes in the training and inference phases. We further design evaluation methods to detect position bias and token bias, along with training-free debiasing methods. Extensive experiments on two real-world datasets validate the effectiveness of L3TR, showing consistent improvements over existing baselines.
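The abstract's ID sampling idea — reconciling candidate-set sizes that differ between training and inference — can be illustrated with a minimal sketch. The assumption here (not stated in the abstract) is that candidates are tagged with ID tokens drawn randomly from a fixed pool larger than any single list, so the model never overfits to a particular ID range or list length; the pool size, token format, and function name are all hypothetical.

```python
import random

def sample_candidate_ids(num_candidates, id_pool_size=20, seed=None):
    """Assign each candidate document a distinct ID token sampled from a
    fixed pool, so training lists of one size can generalize to inference
    lists of another. (Illustrative sketch; the scheme is an assumption.)"""
    if num_candidates > id_pool_size:
        raise ValueError("candidate list exceeds the ID pool")
    rng = random.Random(seed)
    pool = [f"[ID_{i}]" for i in range(id_pool_size)]
    return rng.sample(pool, num_candidates)
```

For example, a training list of 5 candidates and an inference list of 12 would both receive IDs from the same 20-token pool, so no ID token is seen only at one phase.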