ArXiv TLDR

Fair Agents: Balancing Multistakeholder Alignment in Multi-Agent Personalization Systems

arXiv: 2605.02379

Andrea Forster, Peter Müllner, Denis Helic, Elisabeth Lex, Dominik Kowald

cs.IR

TLDR

This paper proposes a conceptual framework for designing fair multi-agent personalization systems that balance competing stakeholder objectives using LLM agents.

Key contributions

  • Methods to align stakeholder objectives with LLM agents.
  • Aggregation strategies, e.g., based on social choice theory, for fair collective decisions.
  • Stakeholder-centric evaluation of individual and collective agent behavior.
  • A showcase of the framework in a tourism use case, with a discussion of applications in other domains.
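The aggregation contribution above can be illustrated with one classic social-choice rule. The sketch below is not from the paper; it is a minimal Borda-count example, assuming each stakeholder agent (the agent names and tourism items are hypothetical) returns a ranked list of candidate options best-first:

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Combine per-agent rankings into one collective ranking via Borda count.

    rankings: list of lists, each an agent's candidate items ordered best-first.
    Returns all items sorted by total Borda score, highest first (ties broken
    alphabetically for determinism).
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += n - 1 - pos  # top-ranked item earns n-1 points
    return sorted(scores, key=lambda item: (-scores[item], item))

# Hypothetical tourism scenario: three stakeholder agents rank destinations.
user_agent = ["beach", "museum", "hike"]
provider_agent = ["museum", "beach", "hike"]
sustainability_agent = ["hike", "museum", "beach"]

collective = borda_aggregate([user_agent, provider_agent, sustainability_agent])
# "museum" wins here: it is no agent's last choice, illustrating how positional
# rules can favor broadly acceptable compromises over polarizing favorites.
```

Borda count is only one option; the paper points to social choice theory more broadly, and other rules (e.g., plurality or approval voting) trade off fairness properties differently.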

Why it matters

LLM agents are increasingly used for personalization, often in settings with multiple stakeholders and competing goals. This paper provides a framework for balancing these objectives fairly in multi-agent systems, addressing key challenges in aligning agents with stakeholder goals and in aggregating their decisions.

Original Abstract

LLM agents are increasingly used for personalization due to their ability to communicate directly with users in natural language, integrate external knowledge bases, and negotiate with other (possibly human) agents. Especially in multistakeholder AI systems with multiple distinct objectives, LLM agents are used to independently optimize for each stakeholder's goals. Here, stakeholder alignment is essential to identify and map these goals to provide LLM agents with quantifiable objectives. Plus, the way in which the outputs of the LLM agents are aggregated is fundamental to ensuring fair outcomes for all agents and, therefore, stakeholders. In this work, we identify open research challenges and propose a conceptual framework for designing fair multi-agent multistakeholder personalization systems that balance competing stakeholder objectives. Our framework integrates (i) methods to align stakeholder objectives and LLM agents, (ii) aggregation strategies, e.g., based on social choice theory, to form fair collective decisions, and (iii) stakeholder-centric evaluation procedures for both individual and collective agent behavior. We showcase our framework through a tourism use case and discuss possible applications in other domains, such as education and healthcare. Finally, we discuss domain-specific fairness tensions and review datasets for evaluating multistakeholder fairness and multi-agent personalization systems.
