Algorithmic Feature Highlighting for Human-AI Decision-Making
TLDR
This paper studies algorithms that highlight a small subset of case-specific features for human decision-makers, analyzing both the computational tractability of optimal highlighting and how humans interpret the algorithm's selections.
Key contributions
- Models feature highlighting as a constrained information policy for human-AI decision-making.
- Shows optimizing feature highlighting for sophisticated human agents is computationally intractable.
- Demonstrates optimizing for naive human agents is tractable when the maximal bandwidth is fixed.
- Reveals policies optimal for sophisticated agents can perform arbitrarily poorly with naive ones, motivating robust alternatives.
Why it matters
This work matters for designing human-AI collaboration tools that respect human cognitive limits. It shows that how humans interpret AI-highlighted features shapes what a highlighting policy can achieve, and it provides a framework for building computationally feasible, robust highlighting algorithms for real-world decision support systems.
Original Abstract
Human decision-makers often face choices about complex cases with many potentially relevant features, but limited bandwidth to inspect and integrate all available information. In such settings, we study algorithms that highlight a small subset of case-specific features for human consideration, rather than producing a single prediction or recommendation. We model highlighting as a constrained information policy that selects a small number of features to reveal. A central issue is how humans interpret the algorithm's choice of features: a sophisticated agent correctly conditions on the selection rule, while a naive agent updates only on revealed feature values and treats the selection event as exogenous. We show that optimizing highlighting for sophisticated agents can be computationally intractable, even in simple discrete and binary settings, whereas optimizing for naive agents is tractable as long as the maximal bandwidth is fixed. We also show that a highlighting policy that is optimal for sophisticated agents can perform arbitrarily poorly when deployed to naive agents, motivating robust, implementable alternatives. We illustrate our framework in a calibrated empirical exercise based on the American Housing Survey. Overall, our results establish the value of highlighting a context-specific set of features rather than a fixed one as a practically appealing and computationally feasible tool for achieving human-algorithm complementarity.