Offline Evaluation Measures of Fairness in Recommender Systems
TLDR
This thesis analyzes the theoretical, empirical, and conceptual limitations of offline fairness evaluation measures for recommender systems and proposes new measures and usage guidelines that overcome them.
Key contributions
- Identifies and analyzes theoretical, empirical, and conceptual limitations of existing fairness measures.
- Proposes novel evaluation approaches and measures that overcome the identified limitations.
- Provides guidelines for appropriate selection and usage of fairness evaluation measures.
Why it matters
Fairness in AI has become a critical concern, driven in part by recent legislation on responsible AI. Existing fairness measures for recommender systems have significant limitations that hinder their effective use; this work provides the analysis and the new tools needed to evaluate fairness accurately and to select measures appropriately.
Original Abstract
The evaluation of recommender system fairness has become increasingly important, especially with recent legislation that emphasises the development of fair and responsible artificial intelligence. This has led to the emergence of various fairness evaluation measures, which quantify fairness based on different definitions. However, many such measures are simply proposed and used without further analysis of their robustness. As a result, there is insufficient understanding and awareness of the measures' limitations. Among other issues, it is not known what kind of model outputs produce the (un)fairest score, how the measure scores are empirically distributed, and whether there are cases where the measures cannot be computed (e.g., due to division by zero). These issues cause difficulty in interpreting the measure scores and confusion about which measure(s) should be used for a specific case. This thesis presents a series of papers that assess and overcome various theoretical, empirical, and conceptual limitations of existing recommender system fairness evaluation measures. We investigate a wide range of offline evaluation measures for different fairness notions, divided by evaluation subject (users and items) and by evaluation granularity (groups of subjects and individual subjects). Firstly, we perform theoretical and empirical analyses of the measures, exposing flaws that limit their interpretability, expressiveness, or applicability. Secondly, we contribute novel evaluation approaches and measures that overcome these limitations. Finally, considering the measures' limitations, we recommend guidelines for appropriate measure usage, thereby allowing for more precise selection of fairness evaluation measures in practical scenarios. Overall, this thesis contributes to advancing the state of the art in the offline evaluation of fairness in recommender systems.
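To make the "cannot be computed" failure mode from the abstract concrete, here is a minimal sketch of a hypothetical group-fairness measure: the exposure ratio between a protected item group and all other items in the top-k recommendations. The function `exposure_ratio` and its grouping scheme are illustrative assumptions, not a measure defined in the thesis.

```python
def exposure_ratio(recommended_items, group_of, k=10):
    """Hypothetical group-fairness measure: exposure of the protected
    item group divided by exposure of all other items in the top-k.

    Returns NaN when the denominator group receives zero exposure,
    illustrating the division-by-zero cases the abstract mentions.
    """
    top_k = recommended_items[:k]
    protected = sum(1 for item in top_k if group_of[item] == "protected")
    other = len(top_k) - protected
    if other == 0:
        return float("nan")  # measure undefined for this model output
    return protected / other


groups = {1: "protected", 2: "other", 3: "protected"}
print(exposure_ratio([1, 2, 3], groups, k=3))  # 2.0 (two protected, one other)
print(exposure_ratio([1, 3], groups, k=2))     # nan: no "other" items exposed
```

A score of 1.0 would indicate equal exposure between the groups; scores far from 1.0, or undefined scores like the one above, are exactly the interpretability and applicability issues the thesis analyzes.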