On Reasoning Behind Next Occupation Recommendation
Shan Dong, Palakorn Achananuparp, Hieu Hien Mai, Lei Wang, Yao Lu, et al.
TLDR
This paper introduces a two-step, LLM-based reasoning approach for next occupation prediction, in which LLMs are fine-tuned on high-quality "oracle reasons" to improve accuracy.
Key contributions
- Proposes a novel two-step LLM reasoning approach for next occupation prediction, in which a reason generator's output conditions an occupation predictor (see the sketch after this list).
- Fine-tunes LLMs with high-quality "oracle reasons" derived via LLM-as-a-Judge for improved performance.
- Achieves next occupation prediction accuracy comparable to fully supervised methods, while outperforming unsupervised ones.
- Demonstrates that a single LLM fine-tuned for both reason generation and prediction outperforms two separately fine-tuned models.
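To make the two-step pipeline concrete, here is a minimal sketch in Python. The `llm` callable, the prompt wording, and the helper names (`generate_reason`, `predict_occupation`) are illustrative assumptions; the paper's actual prompts, models, and fine-tuning setup may differ.

```python
# Minimal sketch of the two-step reasoning pipeline (hypothetical interface).
# `llm` is assumed to be any text-in/text-out callable, e.g. a chat model.

def generate_reason(llm, education_history: str, career_history: str) -> str:
    """Step 1: derive a 'reason' summarizing the user's career preferences."""
    prompt = (
        "Given this user's education and career history, write a short "
        "reason explaining their career preferences.\n"
        f"Education: {education_history}\nCareer: {career_history}\nReason:"
    )
    return llm(prompt)

def predict_occupation(llm, career_history: str, reason: str) -> str:
    """Step 2: predict the next occupation, conditioned on the reason."""
    prompt = (
        "Given the career history and the reason below, predict the "
        "user's next occupation title.\n"
        f"Career: {career_history}\nReason: {reason}\nNext occupation:"
    )
    return llm(prompt)

def next_occupation(llm, education_history: str, career_history: str) -> str:
    reason = generate_reason(llm, education_history, career_history)
    return predict_occupation(llm, career_history, reason)
```

Note that, per the paper's findings, the same fine-tuned model can back both calls: a single LLM trained for both reason generation and prediction outperformed two separately fine-tuned models.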
Why it matters
This work advances LLM capabilities for career path prediction by introducing an explicit reasoning framework. It shows how fine-tuning with high-quality "oracle reasons" can compensate for LLMs' lack of alignment with career paths and the unobserved reasons behind occupation decisions, yielding predictions comparable to fully supervised methods. The results highlight the role of explicit reasoning in improving predictive accuracy for complex, real-world recommendation tasks.
Original Abstract
In this work, we develop a novel reasoning approach to enhance the performance of large language models (LLMs) in future occupation prediction. In this approach, a reason generator first derives a "reason" for a user using his/her past education and career history. The reason summarizes the user's preferences and is used as the input of an occupation predictor to recommend the user's next occupation. This two-step occupation prediction approach is, however, non-trivial, as LLMs are not aligned with career paths or the unobserved reasons behind each occupation decision. We therefore propose to fine-tune LLMs to improve their reasoning and occupation prediction performance. We first derive high-quality oracle reasons, as measured by factuality, coherence, and utility criteria, using an LLM-as-a-Judge. These oracle reasons are then used to fine-tune small LLMs to perform reason generation and next occupation prediction. Our extensive experiments show that: (a) our approach effectively enhances LLMs' accuracy in next occupation prediction, making them comparable to fully supervised methods and outperforming unsupervised methods; (b) a single LLM fine-tuned to perform reason generation and occupation prediction outperforms two LLMs fine-tuned to perform the tasks separately; and (c) the next occupation prediction accuracy depends on the quality of the generated reasons. Our code is available at https://github.com/Sarasarahhhhh/job_prediction.
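The oracle-reason selection described in the abstract can be sketched in the same style. Only the three criteria (factuality, coherence, utility) come from the paper; the `judge` interface, the 1-5 rating scale, and the acceptance threshold below are assumptions for illustration.

```python
# Hypothetical sketch of oracle-reason selection with an LLM-as-a-Judge.
from dataclasses import dataclass

CRITERIA = ("factuality", "coherence", "utility")  # criteria from the paper

@dataclass
class JudgedReason:
    reason: str
    scores: dict[str, int]  # criterion -> score on an assumed 1-5 scale

def judge_reason(judge, history: str, reason: str) -> JudgedReason:
    """Score one candidate reason on each criterion via the judge LLM."""
    scores = {}
    for criterion in CRITERIA:
        prompt = (
            f"Rate the {criterion} of this reason for the user's career "
            "history on a 1-5 scale. Reply with a single integer.\n"
            f"History: {history}\nReason: {reason}\nScore:"
        )
        scores[criterion] = int(judge(prompt).strip())
    return JudgedReason(reason, scores)

def select_oracle_reasons(judge, history, candidates, min_score=4):
    """Keep reasons the judge rates highly on every criterion (assumed rule)."""
    judged = (judge_reason(judge, history, r) for r in candidates)
    return [j.reason for j in judged
            if all(s >= min_score for s in j.scores.values())]
```

Reasons that pass such a filter would then serve as fine-tuning targets for the small reason-generation and prediction models.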