Semiparametric Efficient Test for Interpretable Distributional Treatment Effects
Houssam Zenati, Arthur Gretton
TLDR
DR-ME is a new semiparametrically efficient test that, unlike global tests, identifies the specific outcome locations at which a treatment alters the outcome distribution.
Key contributions
- Introduces DR-ME, a semiparametrically efficient finite-location test for interpretable distributional treatment effects.
- Identifies specific causal-discrepancy coordinates by evaluating an interventional kernel witness at learned outcome locations.
- Derives orthogonal doubly robust kernel features, achieving chi-square calibration and optimal local power.
- Employs a principled location-learning criterion with sample splitting for valid post-selection inference.
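The core idea behind the contributions above — evaluating a kernel witness at a few outcome locations and whitening by its covariance to obtain a chi-square calibrated statistic — can be illustrated with a minimal sketch. This is a simplified, hypothetical illustration, not the paper's implementation: it assumes a randomized setting so that plain sample means stand in for the orthogonal doubly robust kernel features, and the function names, Gaussian kernel, and bandwidth choice are all assumptions for the example.

```python
import numpy as np
from scipy import stats

def gauss_kernel(y, v, bw):
    # Gaussian kernel features k(y_i, v_j) for outcomes y at locations v
    return np.exp(-(y[:, None] - v[None, :]) ** 2 / (2 * bw ** 2))

def finite_location_test(y1, y0, locations, bw=1.0, alpha=0.05):
    """Finite-location kernel two-sample test (simplified sketch).

    Evaluates the mean-embedding witness at J fixed locations and
    whitens it by its empirical covariance; under the null the
    statistic is approximately chi-square with J degrees of freedom.
    """
    f1 = gauss_kernel(y1, locations, bw)   # n1 x J feature matrix
    f0 = gauss_kernel(y0, locations, bw)   # n0 x J feature matrix
    n1, n0 = len(y1), len(y0)
    witness = f1.mean(axis=0) - f0.mean(axis=0)  # witness at J locations
    cov = np.cov(f1, rowvar=False) / n1 + np.cov(f0, rowvar=False) / n0
    cov += 1e-8 * np.eye(len(locations))   # small ridge for stability
    stat = witness @ np.linalg.solve(cov, witness)
    pval = stats.chi2.sf(stat, df=len(locations))
    return stat, pval, bool(pval < alpha)

# Example: equal means but different dispersion, invisible to a mean test
rng = np.random.default_rng(0)
y1 = rng.normal(0.0, 2.0, size=500)  # treated: wider spread
y0 = rng.normal(0.0, 1.0, size=500)  # control
locs = np.array([-2.0, 0.0, 2.0])    # fixed (not learned) locations
stat, pval, reject = finite_location_test(y1, y0, locs)
```

In the full method the locations are learned on a held-out split (sample splitting preserves post-selection validity) and the features are the doubly robust versions built from estimated nuisances; here the locations are simply fixed for clarity.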
Why it matters
This paper addresses a critical limitation of existing causal inference methods by pinpointing *where* treatment effects manifest in outcome distributions, rather than just detecting their presence. This interpretability is crucial for understanding complex interventions and guiding targeted policy decisions, especially in fields like medicine.
Original Abstract
Distributional treatment effects can be invisible to means: a treatment may preserve average outcomes while changing tails, modes, dispersion, or rare-event probabilities. Kernel tests can detect discrepancies between interventional outcome laws, but global tests do not reveal where the laws differ. We propose DR-ME, to our knowledge the first semiparametrically efficient finite-location test for interpretable distributional treatment effects. DR-ME evaluates an interventional kernel witness at learned outcome locations, returning causal-discrepancy coordinates rather than only a global rejection. From observational data, we derive orthogonal doubly robust kernel features whose centered oracle form is the canonical gradient of this finite witness. For fixed locations, we characterize the local testing limit: DR-ME is chi-square calibrated under the null, has noncentral chi-square local power, and uses the covariance whitening that optimizes local signal-to-noise for discrepancies visible through the selected coordinates. This efficient local-power geometry yields a principled location-learning criterion, with sample splitting preserving post-selection validity. Experiments show near-nominal type-I error, competitive power against global doubly robust kernel tests, and interpretable learned locations that localize distributional effects in a semi-synthetic medical-imaging study.