ArXiv TLDR

OTSS: Output-Targeted Soft Segmentation for Contextual Decision-Weight Learning

2605.00193

Renjun Hu, Hyun-Soo Ahn

cs.LG, stat.ML

TLDR

OTSS is a soft segmentation model that learns context-specific decision weights, achieving lower downstream regret than baselines while running orders of magnitude faster than EM mixture regression.

Key contributions

  • Introduces OTSS for learning personalized, decision-ready weight vectors in contextual settings.
  • Shows theoretically that soft segmentation avoids the hard-partition approximation floor and attains parametric rates.
  • Achieves the lowest mean regret while running about two orders of magnitude faster than EM mixture regression.
  • Outperforms comparators on controlled benchmarks and a real-world retail dataset.

Why it matters

This paper addresses a key limitation of ML decision systems, namely treating the context-specific objective as fixed, by learning dynamic, context-specific decision weights. OTSS offers a more adaptive and efficient approach to constrained decision-making, and its combination of low regret and fast training could benefit real-world applications that require personalized optimization.

Original Abstract

Many machine learning systems make constrained decisions by optimizing factorized objectives, but the context-specific objective is often treated as fixed. We study contextual decision-weight learning: from logged decisions and proxy outputs, learn an optimizer-facing weight vector w(x) over interpretable decision factors z(x,d), rather than a direct policy or generic predictive score. We propose OTSS, an output-targeted soft-segmentation model that deploys the personalized decision-ready weight vector. At the function-class level, the theory highlights a hard-versus-soft distinction. Hard partitions incur an approximation-estimation tradeoff under overlap, while a realizable fixed-K soft class removes the hard-partition approximation floor and attains a parametric rate. We evaluate OTSS in controlled benchmarks with finite evaluation libraries, where the true weight vector and downstream regret can be computed exactly. In the representative overlap setting, OTSS attains the lowest mean regret among the comparators, including EM mixture regression, the strongest soft-mixture baseline in our comparison; it matches EM on coefficient recovery while running about two orders of magnitude faster. In a matched K=5 benchmark, OTSS remains competitive under hard-routed truth and improves as heterogeneity becomes softer and sample size grows. On a fixed Complete Journey retail anchor with real household covariates and action geometry, OTSS again achieves the lowest mean-regret point estimate.
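To make the setup concrete, here is a minimal sketch of soft-segmentation decision weights in the spirit of the abstract: a softmax gate over covariates x mixes K per-segment weight vectors into a personalized w(x), which then scores a finite library of decision-factor vectors z(x, d). All names, dimensions, and the fitting loss below are illustrative assumptions, not the paper's exact model or estimator.

```python
import numpy as np

# Hypothetical toy instance: K segments, P decision factors, D covariates.
rng = np.random.default_rng(0)
K, P, D = 3, 4, 2

gate_W = rng.standard_normal((K, D))   # gate parameters (assumed form)
seg_w = rng.standard_normal((K, P))    # per-segment weight vectors w_k

def soft_weights(x):
    """Personalized weights w(x) = sum_k softmax(gate_W @ x)_k * w_k."""
    logits = gate_W @ x
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                     # soft segment memberships
    return pi @ seg_w                  # (P,) context-specific weight vector

def choose_action(x, Z):
    """Argmax of w(x) . z(x, d) over a finite library Z of shape (n_actions, P)."""
    return int(np.argmax(Z @ soft_weights(x)))

def output_loss(X, Z_logged, y):
    """Squared error between proxy outputs y and w(x_i) . z_i on logged data
    (an illustrative stand-in for the paper's output-targeted objective)."""
    preds = np.array([z @ soft_weights(x) for x, z in zip(X, Z_logged)])
    return float(np.mean((y - preds) ** 2))

x = rng.standard_normal(D)
Z = rng.standard_normal((5, P))        # five candidate actions
print(soft_weights(x).shape, choose_action(x, Z))
```

Because the gate is soft, overlapping populations contribute to several segments at once, which is the intuition behind the hard-versus-soft distinction: a hard partition must commit each context to one segment, incurring an approximation floor under overlap.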
