Variational Inference for Lévy Process-Driven SDEs via Neural Tilting
Yaman Kindap, Manfred Opper, Benjamin Dupuis, Umut Simsekli, Tolga Birdal
TLDR
This paper introduces a neural exponential tilting framework for variational inference in Lévy-driven SDEs, yielding tractable posterior inference for processes with jumps and heavy tails where Gaussian-based variational methods fail.
Key contributions
- A novel neural exponential tilting framework for variational inference in Lévy-driven SDEs.
- Uses neural networks to exponentially reweight the Lévy measure, preserving jump structure.
- Achieves closed-form normalization of the tilted measure via a quadratic neural parametrization for efficiency (see the worked example after this list).
- Employs a conditional Gaussian representation of stable processes and symmetry-aware Monte Carlo estimators for scalable inference (sketched in code after the abstract).
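The digest does not reproduce the paper's quadratic parametrization, but a small worked example shows why quadratic tilts yield closed-form normalizers. Assume (our illustration, not necessarily the paper's exact construction) a tilt θ(x) = −a x² with a > 0 output by a network, applied to a symmetric α-stable Lévy measure ν(dx) = c|x|^{−1−α} dx:

```latex
% Illustrative only: quadratic tilt \theta(x) = -a x^2 (a > 0) of a
% symmetric \alpha-stable Levy measure \nu(dx) = c |x|^{-1-\alpha} dx.
% The normalizing integral of the tilted measure is available in closed form:
\int_{\mathbb{R}} \left(1 - e^{-a x^{2}}\right) \nu(dx)
  = 2c \int_{0}^{\infty} \left(1 - e^{-a x^{2}}\right) x^{-1-\alpha}\, dx
  = \frac{2c}{\alpha}\, \Gamma\!\left(1 - \tfrac{\alpha}{2}\right) a^{\alpha/2},
  \qquad 0 < \alpha < 2,\ a > 0.
```

The substitution u = a x² and the identity ∫₀^∞ (1 − e^{−u}) u^{−1−s} du = Γ(1 − s)/s for 0 < s < 1 do all the work: the network only has to output the positive coefficient a (a positive-definite quadratic form in higher dimensions) for the normalizer to stay analytic.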
Why it matters
Modelling extreme events and heavy-tailed phenomena is critical for reliable predictive systems. Existing Bayesian inference methods for Lévy-driven SDEs either scale poorly (Monte Carlo) or rest on Gaussian assumptions that cannot capture discontinuities (neural variational inference). This work provides an efficient and accurate alternative, enabling robust inference in domains such as finance and climate science.
Original Abstract
Modelling extreme events and heavy-tailed phenomena is central to building reliable predictive systems in domains such as finance, climate science, and safety-critical AI. While Lévy processes provide a natural mathematical framework for capturing jumps and heavy tails, Bayesian inference for Lévy-driven stochastic differential equations (SDEs) remains intractable with existing methods: Monte Carlo approaches are rigorous but lack scalability, whereas neural variational inference methods are efficient but rely on Gaussian assumptions that fail to capture discontinuities. We address this tension by introducing a neural exponential tilting framework for variational inference in Lévy-driven SDEs. Our approach constructs a flexible variational family by exponentially reweighting the Lévy measure using neural networks. This parametrization preserves the jump structure of the underlying process while remaining computationally tractable. To enable efficient inference, we develop a quadratic neural parametrization that yields closed-form normalization of the tilted measure, a conditional Gaussian representation for stable processes that facilitates simulation, and symmetry-aware Monte Carlo estimators for scalable optimization. Empirically, we demonstrate that the method accurately captures jump dynamics and yields reliable posterior inference in regimes where Gaussian-based variational approaches fail, on both synthetic and real-world datasets.
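The abstract's "conditional Gaussian representation for stable processes" plausibly builds on the classical sub-Gaussian representation of symmetric α-stable laws: X = √A · G, with A a positive (α/2)-stable mixing variable and G Gaussian, so that X is Gaussian conditionally on A. Below is a minimal NumPy sketch of that standard construction, together with a toy symmetry-aware estimator; all function names are ours, and this illustrates the textbook technique rather than the paper's actual code.

```python
import numpy as np

def positive_stable(rho, size, rng):
    """Kanter's method: draw S > 0 with Laplace transform
    E[exp(-lam * S)] = exp(-lam**rho), for 0 < rho < 1."""
    u = rng.uniform(0.0, np.pi, size)             # Zolotarev angle
    e = rng.exponential(1.0, size)                # unit exponential
    a = (np.sin((1.0 - rho) * u)
         * np.sin(rho * u) ** (rho / (1.0 - rho))
         / np.sin(u) ** (1.0 / (1.0 - rho)))      # Zolotarev's function
    return (a / e) ** ((1.0 - rho) / rho)

def sas_conditionally_gaussian(alpha, sigma, size, rng=None):
    """Symmetric alpha-stable draws via the sub-Gaussian representation
    X = sqrt(A) * G, with A positive (alpha/2)-stable and
    G ~ N(0, 2*sigma**2) (Samorodnitsky & Taqqu, Prop. 1.3.1).
    Conditionally on A, X is Gaussian, which is what lets Gaussian
    machinery be reused for stable processes."""
    assert 0.0 < alpha < 2.0
    rng = np.random.default_rng(rng)
    mix = positive_stable(alpha / 2.0, size, rng)
    g = rng.normal(0.0, np.sqrt(2.0) * sigma, size)
    return np.sqrt(mix) * g

def symmetric_mc(f, x):
    """Toy symmetry-aware Monte Carlo estimate of E[f(X)]: averaging f
    over the +/- pairs cancels the odd part of f exactly when the law
    of X is symmetric, reducing variance."""
    return float(0.5 * (f(x) + f(-x)).mean())

if __name__ == "__main__":
    x = sas_conditionally_gaussian(alpha=1.5, sigma=1.0, size=100_000, rng=0)
    print(symmetric_mc(np.tanh, x))  # exactly 0: tanh is odd
```

The symmetry-aware estimator above is only the simplest instance of the idea; the paper's estimators presumably exploit the symmetry of the tilted Lévy measure in a more structured way.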