ArXiv TLDR

Smoothing the Landscape: Causal Structure Learning via Diffusion Denoising Objectives

arXiv:2604.02250

Hao Zhu, Di Zhou, Donna Slonim

cs.LG · stat.ML

TLDR

DDCD uses diffusion denoising objectives and an adaptive acyclicity constraint to learn causal structures, improving stability and scalability.

Key contributions

  • Proposes Denoising Diffusion Causal Discovery (DDCD) for learning causal structures.
  • Utilizes diffusion denoising score matching to smooth gradients for faster, more stable convergence.
  • Introduces an adaptive k-hop acyclicity constraint, improving runtime by avoiding matrix inversion.
  • Achieves competitive performance on synthetic data and demonstrates practical utility on real-world examples.
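The gradient-smoothing claim rests on the standard denoising score matching objective: perturb the data with Gaussian noise, then regress a model onto the score of the perturbation kernel. A minimal NumPy sketch of that loss — the function name and shapes are illustrative, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def dsm_loss(score_fn, x, sigma):
    """Denoising score matching: perturb x with Gaussian noise of scale
    sigma, then fit score_fn to the score of the perturbation kernel,
    grad log q(x_tilde | x) = -(x_tilde - x) / sigma**2."""
    eps = rng.standard_normal(x.shape)
    x_tilde = x + sigma * eps
    target = -(x_tilde - x) / sigma**2
    return np.mean((score_fn(x_tilde) - target) ** 2)
```

Averaging this objective over many noise draws yields a smoother loss surface than fitting the raw data directly, which is the stability argument the summary above refers to.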

Why it matters

Understanding causal dependencies is crucial for decision-making, but existing methods struggle with high-dimensional data. DDCD offers a novel, more stable, and scalable approach to causal structure learning. This advancement could significantly improve the reliability of causal inference in complex datasets.
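The adaptive k-hop acyclicity constraint mentioned above sidesteps the matrix exponential or inversion used by NOTEARS-style penalties. The paper's exact formulation is not reproduced in this digest, but a truncated polynomial penalty — exploiting the fact that tr((W ∘ W)^i) > 0 exactly when the graph of W contains a length-i cycle — might be sketched like this (illustrative, not the authors' code):

```python
import numpy as np

def khop_acyclicity(W, k):
    """Truncated polynomial acyclicity penalty (hypothetical sketch).
    M = W * W is entrywise nonnegative, so tr(M^i) > 0 iff the graph
    of W contains a cycle of length i; summing traces up to k penalizes
    all cycles of length <= k without any matrix inversion."""
    M = W * W
    P = np.eye(W.shape[0])
    h = 0.0
    for _ in range(k):
        P = P @ M          # P now holds M^i
        h += np.trace(P)
    return h
```

For any DAG the penalty is exactly zero at every k, and the cost is k dense matrix multiplications, avoiding the matrix exponential or inverse that existing constraints require.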

Original Abstract

Understanding causal dependencies in observational data is critical for informing decision-making. These relationships are often modeled as Bayesian Networks (BNs) and Directed Acyclic Graphs (DAGs). Existing methods, such as NOTEARS and DAG-GNN, often face issues with scalability and stability in high-dimensional data, especially when there is a feature-sample imbalance. Here, we show that the denoising score matching objective of diffusion models could smooth the gradients for faster, more stable convergence. We also propose an adaptive k-hop acyclicity constraint that improves runtime over existing solutions that require matrix inversion. We name this framework Denoising Diffusion Causal Discovery (DDCD). Unlike generative diffusion models, DDCD utilizes the reverse denoising process to infer a parameterized causal structure rather than to generate data. We demonstrate the competitive performance of DDCD on synthetic benchmarking data. We also show that our methods are practically useful by conducting qualitative analyses on two real-world examples. Code is available at https://github.com/haozhu233/ddcd.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.