SHIFT: Robust Double Machine Learning for Average Dose-Response Functions under Heavy-Tailed Contamination
TLDR
SHIFT is a robust Double Machine Learning estimator for the Average Dose-Response Function that substantially improves outlier handling under heavy-tailed contamination.
Key contributions
- Introduces SHIFT (Self-calibrated Heavy-tail Inlier-Fit with Tempering), a robust DML estimator for the Average Dose-Response Function that is resilient to heavy-tailed contamination.
- Cuts level-RMSE from 1.03 to 0.33 under localized contamination (p = 0.25) by scaling the inlier cutoff of a defensive OLS refit with the post-GNC residual MAD rather than the raw-outcome MAD (see the sketch after this list).
- Is the only method with worst-case RMSE below 0.35 that emits a non-uniform per-sample weight vector, recovering the ground-truth outlier mask at mean F1 ≈ 0.96 (range 0.945–0.968) on Gaussian-jump DGPs.
- Provides a six-technique Extreme Value Theory diagnostic suite (Hill, GPD-MLE/PWM, GEV, mean excess, parameter stability, causal tail coefficient) that helps practitioners distinguish Fréchet from Weibull tail regimes and choose between SHIFT and L1 alternatives.
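The defensive refit behind the RMSE drop can be pictured with a short sketch: fit plain OLS only on points whose post-GNC residual falls inside a MAD-scaled inlier band. This is a minimal Python illustration of the idea, not the paper's code; the function name, the `cutoff` multiplier, and the assumption that post-GNC residuals are already available are all illustrative.

```python
import numpy as np
from numpy.linalg import lstsq

def defensive_ols_refit(X, y, gnc_residuals, cutoff=3.0):
    """OLS refit on points whose post-GNC residual lies within a
    MAD-scaled inlier band; returns coefficients and the inlier mask.
    (`gnc_residuals` and `cutoff` are illustrative, not the paper's API.)
    """
    r = np.asarray(gnc_residuals, dtype=float)
    # Robust scale from the *post-GNC residuals*, not the raw outcomes.
    # The 1.4826 factor makes MAD consistent with the std. dev. under Gaussianity.
    center = np.median(r)
    mad = 1.4826 * np.median(np.abs(r - center))
    inliers = np.abs(r - center) <= cutoff * mad
    # Plain least squares on the retained inlier subsample.
    Xd = np.column_stack([np.ones(inliers.sum()), X[inliers]])
    beta, *_ = lstsq(Xd, y[inliers], rcond=None)
    return beta, inliers
```

The returned `inliers` mask is the kind of non-uniform per-sample weight vector that can be compared against a ground-truth outlier mask to produce the F1 scores quoted above.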
Why it matters
Standard Double Machine Learning pipelines for dose-response functions rely on kernel-weighted local-linear smoothers with unbounded influence, so a single outlier can bias the estimated curve across an entire kernel window. SHIFT bounds that influence while leaving clean-data performance essentially unchanged, enabling more reliable causal inference when outcomes are heavy-tailed or contaminated.
Original Abstract
Double-machine-learning pipelines for the Average Dose-Response Function rely on kernel-weighted local-linear smoothers, which inherit unbounded functional influence: a single outlier within a kernel window biases the curve across the entire window. We introduce SHIFT (Self-calibrated Heavy-tail Inlier-Fit with Tempering), a robust DML estimator combining cross-fit nuisance orthogonalization with a kernel-local Welsch-loss second stage optimized by Graduated Non-Convexity, and -- the principal design choice -- a defensive OLS refit whose inlier cutoff is scaled by post-GNC residual MAD rather than the raw-outcome MAD. On a localized-contamination stress test at $p=0.25$ this design choice drops level-RMSE from 1.03 to 0.33 while leaving clean and uniformly-contaminated runs unchanged. Across 1,400 main-sweep fits, SHIFT has competitive worst-case shape recovery (RMSE $0.325$ at $p=0.25$, second to Huber-DML's $0.276$); among the three methods with worst-case RMSE below $0.35$, only SHIFT emits a non-uniform per-sample weight vector, recovering the ground-truth outlier mask at mean $F_1 \approx 0.96$ (range $0.945$--$0.968$) on Gaussian-jump DGPs. We pair the estimator with a six-technique Extreme Value Theory diagnostic suite (Hill, GPD-MLE/PWM, GEV, Mean Excess, parameter stability, causal tail coefficient) that lets a practitioner distinguish Frechet from Weibull regimes and choose between SHIFT and L1 alternatives on empirical grounds. Extensions to binary-treatment CATE (Huber pseudo-outcome X-Learner) and time-series ADRF (block-CV + rolling MAD) are included. A counter-intuitive ablation: linear nuisance models (Ridge, Lasso) outperform gradient-boosted nuisances for robust DML under uniform contamination, inverting the usual more-flexible-is-better heuristic.
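For intuition on the Welsch-loss second stage optimized by Graduated Non-Convexity, here is a minimal sketch of one plausible realization: iteratively reweighted least squares under the Welsch weight w(r) = exp(-(r/c)^2), with the scale c annealed from a large value (where the loss is nearly quadratic and convex) down to a target. It omits the kernel-local weighting and cross-fit orthogonalization, and every name and default below is an assumption for illustration, not the paper's calibrated choice.

```python
import numpy as np

def welsch_irls_gnc(X, y, c_target=1.345, mu0=64.0, shrink=0.5, inner_iters=10):
    """Iteratively reweighted least squares under the Welsch loss, with a
    graduated non-convexity schedule: start at a large scale c (loss ~ quadratic)
    and anneal it down to `c_target`. Defaults are illustrative guesses.
    """
    Xd = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]        # OLS warm start
    c = c_target * mu0
    while True:
        for _ in range(inner_iters):
            r = y - Xd @ beta
            w = np.exp(-(r / c) ** 2)                   # Welsch weights
            sw = np.sqrt(w)
            beta = np.linalg.lstsq(sw[:, None] * Xd, sw * y, rcond=None)[0]
        if c <= c_target:
            break
        c = max(c_target, c * shrink)                   # GNC annealing step
    final_weights = np.exp(-((y - Xd @ beta) / c_target) ** 2)
    return beta, final_weights
```

The annealing loop is what keeps the non-convex Welsch objective from trapping the fit in a poor local minimum: early iterations behave like ordinary least squares, and outliers are only gradually down-weighted as c shrinks.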