ArXiv TLDR

First-See-Then-Design: A Multi-Stakeholder View for Optimal Performance-Fairness Trade-Offs

2604.14035

Kavya Gupta, Nektarios Kalampalikis, Christoph Heitz, Isabel Valera

cs.LG cs.AI

TLDR

This paper introduces a multi-stakeholder framework for fair algorithmic decision-making, moving beyond prediction-centric views to optimize performance-fairness trade-offs.

Key contributions

  • Introduces a multi-stakeholder framework for fair algorithmic decision-making, explicitly modeling the utilities of both the decision-maker (DM) and the decision subjects (DS).
  • Defines fairness via a social planner's utility, capturing group inequalities under justice notions.
  • Formulates fair decision-making as a post-hoc multi-objective optimization problem.
  • Demonstrates that simple stochastic policies can achieve superior performance-fairness trade-offs by leveraging outcome uncertainty.
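The contributions above can be illustrated with a toy sketch. The code below is a minimal, hypothetical instance of the framework's ingredients — assumed utility functions, synthetic group data, a shared (possibly stochastic) threshold policy, and a scalarized DM-utility/planner-utility objective — not the paper's actual formulation. All function names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic predicted success probabilities for two social-salient groups
# (purely illustrative distributions).
p_a = rng.beta(5, 2, 200)   # group A
p_b = rng.beta(2, 5, 200)   # group B

def dm_utility(p, accept_prob, benefit=1.0, cost=0.5):
    """Expected DM utility: benefit on success minus a fixed cost per acceptance."""
    return np.mean(accept_prob * (benefit * p - cost))

def ds_utility(p, accept_prob, gain=1.0):
    """Expected DS utility: subjects gain when accepted and successful."""
    return np.mean(accept_prob * gain * p)

def policy(p, threshold, smooth=0.0):
    """Shared policy: deterministic threshold (smooth=0) or a stochastic
    sigmoid relaxation that accepts with probability in (0, 1)."""
    if smooth == 0.0:
        return (p >= threshold).astype(float)
    return 1.0 / (1.0 + np.exp(-(p - threshold) / smooth))

def planner_utility(u_groups, notion="rawlsian"):
    """Social planner's utility over per-group DS utilities."""
    if notion == "rawlsian":      # welfare of the worst-off group
        return min(u_groups)
    if notion == "egalitarian":   # penalize inter-group inequality
        return -(max(u_groups) - min(u_groups))
    raise ValueError(notion)

def evaluate(threshold, smooth=0.0, lam=0.5, notion="rawlsian"):
    """Scalarized objective: lam * DM utility + (1 - lam) * planner utility."""
    pi_a = policy(p_a, threshold, smooth)
    pi_b = policy(p_b, threshold, smooth)
    u_dm = 0.5 * (dm_utility(p_a, pi_a) + dm_utility(p_b, pi_b))
    u_sp = planner_utility([ds_utility(p_a, pi_a), ds_utility(p_b, pi_b)], notion)
    return lam * u_dm + (1 - lam) * u_sp

# Post-hoc scan over a shared threshold; compare policy classes.
grid = np.linspace(0.05, 0.95, 19)
best_det = max(evaluate(t, smooth=0.0) for t in grid)
best_sto = max(evaluate(t, smooth=0.1) for t in grid)
print(f"best deterministic: {best_det:.3f}  best stochastic: {best_sto:.3f}")
```

Sweeping `lam` from 0 to 1 traces an (approximate) performance-fairness frontier in the two-dimensional utility space; swapping `notion` switches between the Rawlsian and Egalitarian planner utilities, and per-group thresholds would give the group-specific policy class.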

Why it matters

This paper shifts the focus from prediction-centric fairness to a more comprehensive, justice-based, multi-stakeholder approach. It provides a framework to explicitly model utilities of all parties, leading to better performance-fairness trade-offs, especially with stochastic policies. This enables a more transparent and collaborative design of fair decision-making systems.

Original Abstract

Fairness in algorithmic decision-making is often defined in the predictive space, where predictive performance - used as a proxy for decision-maker (DM) utility - is traded off against prediction-based fairness notions, such as demographic parity or equality of opportunity. This perspective, however, ignores how predictions translate into decisions and ultimately into utilities and welfare for both DM and decision subjects (DS), as well as their allocation across social-salient groups. In this paper, we propose a multi-stakeholder framework for fair algorithmic decision-making grounded in welfare economics and distributive justice, explicitly modeling the utilities of both the DM and DS, and defining fairness via a social planner's utility that captures inequalities in DS utilities across groups under different justice-based fairness notions (e.g., Egalitarian, Rawlsian). We formulate fair decision-making as a post-hoc multi-objective optimization problem, characterizing the achievable performance-fairness trade-offs in the two-dimensional utility space of DM utility and the social planner's utility, under different decision policy classes (deterministic vs. stochastic, shared vs. group-specific). Using the proposed framework, we then identify conditions (in terms of the stakeholders' utilities) under which stochastic policies are more optimal than deterministic ones, and empirically demonstrate that simple stochastic policies can yield superior performance-fairness trade-offs by leveraging outcome uncertainty. Overall, we advocate a shift from prediction-centric fairness to a transparent, justice-based, multi-stakeholder approach that supports the collaborative design of decision-making policies.
