Path-Sampled Integrated Gradients
Firuz Kamalov, Fadi Thabtah, R. Sivaraj, Neda Abdelhamid
TLDR
Path-Sampled Integrated Gradients (PS-IG) is a new framework for feature attribution that improves error convergence and reduces gradient noise.
Key contributions
- Introduces Path-Sampled Integrated Gradients (PS-IG) for generalized feature attribution.
- Proves equivalence to path-weighted IG, enabling deterministic Riemann sum evaluation (see the sketch after this list).
- Improves the error convergence rate from $O(m^{-1/2})$ to $O(m^{-1})$ for smooth models.
- Reduces attribution variance by a factor of 1/3 under uniform sampling while preserving key axiomatic properties.
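
The minimal sketch below is not from the paper; the toy model, function names, and the midpoint Riemann sum are illustrative assumptions. It contrasts standard IG with a PS-IG-style attribution in which path gradients are weighted by the CDF of the baseline-sampling density and evaluated deterministically; under uniform sampling that weight is simply $w(\alpha) = \alpha$.

```python
import numpy as np

def model(x):
    # Toy smooth model F(x) = sum(x^2) + sin(x0 * x1); any differentiable model works.
    return np.sum(x ** 2) + np.sin(x[0] * x[1])

def model_grad(x):
    # Analytic gradient of the toy model above.
    g = 2.0 * x
    g[0] += x[1] * np.cos(x[0] * x[1])
    g[1] += x[0] * np.cos(x[0] * x[1])
    return g

def integrated_gradients(x, baseline, grad_fn, m=64):
    # Standard IG: unweighted Riemann sum of gradients along the straight path.
    alphas = (np.arange(m) + 0.5) / m                      # midpoint rule on [0, 1]
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

def ps_integrated_gradients(x, baseline, grad_fn, m=64, cdf=lambda a: a):
    # PS-IG-style attribution: the same Riemann sum, but each gradient is
    # weighted by the CDF of the baseline-sampling density evaluated at alpha.
    # cdf(a) = a corresponds to uniform sampling of baselines along the path.
    alphas = (np.arange(m) + 0.5) / m
    weights = cdf(alphas)
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * (weights[:, None] * grads).mean(axis=0)

x = np.array([1.0, -0.5, 2.0])
baseline = np.zeros_like(x)
print("IG   :", integrated_gradients(x, baseline, model_grad))
print("PS-IG:", ps_integrated_gradients(x, baseline, model_grad))
```

Because the weights are fixed once the sampling density is chosen, no baselines are actually drawn at attribution time, which is what permits the deterministic Riemann-sum evaluation and the $O(m^{-1})$ rate cited above.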
Why it matters
This paper introduces a more robust and efficient method for feature attribution. By reducing noise and improving convergence, PS-IG enhances the reliability and speed of interpreting complex models. This is vital for trustworthy AI.
Original Abstract
We introduce path-sampled integrated gradients (PS-IG), a framework that generalizes feature attribution by computing the expected value over baselines sampled along the linear interpolation path. We prove that PS-IG is mathematically equivalent to path-weighted integrated gradients, provided the weighting function matches the cumulative distribution function of the sampling density. This equivalence allows the stochastic expectation to be evaluated via a deterministic Riemann sum, improving the error convergence rate from $O(m^{-1/2})$ to $O(m^{-1})$ for smooth models. Furthermore, we demonstrate analytically that PS-IG functions as a variance-reducing filter against gradient noise - strictly lowering attribution variance by a factor of 1/3 under uniform sampling - while preserving key axiomatic properties such as linearity and implementation invariance.
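
To make the stated equivalence concrete, here is one way to write it out (the notation is ours and details may differ from the paper's). With baselines $b(\beta) = x' + \beta(x - x')$ sampled from a density $p$ on $[0, 1]$, a change of variables and a swap of the order of integration give

$$
\mathrm{IG}_i\bigl(x;\, b(\beta)\bigr) = (x_i - x'_i) \int_{\beta}^{1} \frac{\partial F}{\partial x_i}\bigl(x' + \alpha (x - x')\bigr)\, d\alpha,
\qquad
\mathbb{E}_{\beta \sim p}\bigl[\mathrm{IG}_i\bigl(x;\, b(\beta)\bigr)\bigr] = (x_i - x'_i) \int_{0}^{1} P(\alpha)\, \frac{\partial F}{\partial x_i}\bigl(x' + \alpha (x - x')\bigr)\, d\alpha,
$$

where $P$ is the cumulative distribution function of $p$. The right-hand side is a path-weighted IG with weight $w(\alpha) = P(\alpha)$, so the expectation can be evaluated as a deterministic Riemann sum. Under uniform sampling $P(\alpha) = \alpha$, and if gradient evaluations along the path carry i.i.d. noise, the weighted sum's noise variance scales with $\int_0^1 \alpha^2\, d\alpha = 1/3$ of the unweighted one, consistent with the reduction stated in the abstract.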