ArXiv TLDR

Decision-Focused Federated Learning Under Heterogeneous Objectives and Constraints

2604.20031

Konstantinos Ziliaskopoulos, Alexander Vinel

math.OC cs.LG stat.ML

TLDR

This paper introduces Decision-Focused Federated Learning (DFFL) for predict-then-optimize problems with diverse client objectives and constraints.

Key contributions

  • Introduces Decision-Focused Federated Learning (DFFL) for predict-then-optimize with diverse clients.
  • Develops heterogeneity bounds for SPO+ loss, separating objective and feasible-set shifts.
  • Derives sharper bounds for strongly convex regions due to optimizer stability.
  • Shows FedAvg-style DFFL is robust in strongly convex settings, but degrades with constraint heterogeneity.
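At the core of the contributions above is the SPO+ surrogate loss of Elmachtoub and Grigas for a downstream linear problem min_{w∈S} cᵀw. As a minimal sketch of how that loss is computed, the snippet below uses the probability simplex as a hypothetical stand-in feasible set (so the optimization oracle is a trivial argmin); it is illustrative only, not the paper's implementation:

```python
import numpy as np

def simplex_oracle(c):
    """Solve min_{w in simplex} c^T w: the optimum is the vertex e_i, i = argmin_i c_i."""
    w = np.zeros_like(c)
    w[np.argmin(c)] = 1.0
    return w

def spo_plus_loss(c_hat, c):
    """SPO+ surrogate loss for min_{w in S} c^T w with S = probability simplex.

    ell(c_hat, c) = max_{w in S} (c - 2*c_hat)^T w + 2*c_hat^T w*(c) - z*(c),
    where w*(c) is the true optimal decision and z*(c) the true optimal value.
    """
    w_star = simplex_oracle(c)          # true optimal decision
    z_star = c @ w_star                 # true optimal value
    xi = np.max(c - 2.0 * c_hat)        # support-function term over the simplex
    return xi + 2.0 * (c_hat @ w_star) - z_star
```

Note that the loss vanishes when the predicted cost vector equals the true one, consistent with SPO+ being a nonnegative surrogate.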

Why it matters

This work enables federated learning for complex decision-making problems where clients have different goals and constraints, without sharing sensitive data. It provides theoretical insights and practical guidance on when federation is beneficial, especially for strongly convex problems.
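The paper's guidance on when federation helps takes the form of a local-versus-federated excess-risk comparison: federate when the heterogeneity penalty is smaller than the statistical advantage of pooling data. A toy sketch of such a rule is below; the specific form of the pooling gain (a variance term shrinking with sample size) is an illustrative assumption, not the paper's exact bound:

```python
def should_federate(het_penalty, sigma2, n_local, n_pooled):
    """Heuristic local-vs-federated rule (illustrative form only):
    federate when the heterogeneity penalty is below the statistical
    advantage of pooling, modeled here as sigma2 * (1/n_local - 1/n_pooled)."""
    pooling_gain = sigma2 * (1.0 / n_local - 1.0 / n_pooled)
    return het_penalty < pooling_gain
```

Under this stylized rule, a client with few samples (large 1/n_local) tolerates more heterogeneity before opting out of federation, matching the paper's observation that degradation is worst for clients with many samples.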

Original Abstract

We consider what we refer to as the Decision-Focused Federated Learning (DFFL) framework, i.e., a predict-then-optimize approach employed by a collection of agents, where each agent's predictive model is an input to a downstream linear optimization problem, and no direct exchange of raw data is allowed. Importantly, clients can differ both in objective functions and in feasibility constraints. We build on the well-known SPO+ approach and develop heterogeneity bounds for the SPO+ surrogate loss in this case. This is accomplished by employing a support function representation of the feasible region, separating (i) objective shift via norm distances between the cost vectors and (ii) feasible-set shift via shape distances between the constraint sets. In the case of strongly convex feasible regions, sharper bounds are derived due to the optimizer stability. Building on these results, we define a heuristic local-versus-federated excess risk decision rule which, under SPO+ risk, gives a condition for when federation can be expected to improve decision quality: the heterogeneity penalty must be smaller than the statistical advantage of pooling data. We implement a FedAvg-style DFFL set of experiments on both polyhedral and strongly convex problems and show that federation is broadly robust in the strongly convex setting, while performance in the polyhedral setting degrades primarily with constraint heterogeneity, especially for clients with many samples. In other words, especially for the strongly convex case, an approach following a direct implementation of FedAvg and SPO+ can still yield promising performance even when the downstream optimization problems are noticeably different.
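The "direct implementation of FedAvg" mentioned in the abstract amounts to a standard sample-size-weighted average of client model parameters each round. A minimal sketch of that aggregation step (assuming each client's parameters are flattened into a NumPy array):

```python
import numpy as np

def fedavg_round(client_weights, client_sizes):
    """One FedAvg aggregation step: average client parameter vectors,
    weighted by each client's local sample count."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)          # shape: (num_clients, num_params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()
```

In the DFFL setting, each client would first take local steps on its own SPO+ risk before the server applies this averaging; the sketch covers only the server-side step.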
