ArXiv TLDR

Causal Explanations from the Geometric Properties of ReLU Neural Networks

2605.10396

Hector Woods, Philippa Ryan, Rob Alexander

cs.LG cs.NE

TLDR

This paper derives accurate causal explanations directly from the geometric structure of ReLU neural networks, improving interpretability without resorting to distilled surrogate models.

Key contributions

  • Extracts causal explanations directly from the geometric properties of ReLU neural networks.
  • Exploits the piecewise-linear structure of ReLU networks, whose input space is partitioned into convex polytope regions, each applying a single linear function.
  • Ensures explanations accurately reflect the original network's decision-making, unlike distilled models.

Why it matters

Interpreting black-box neural networks is crucial for safety-critical autonomous systems. This work provides a novel way to generate accurate causal explanations directly from the network's structure, avoiding the pitfalls of less reliable distilled models. This enhances trust and safety in AI decisions.

Original Abstract

Neural networks have proved an effective means of learning control policies for autonomous systems, but these learned policies are difficult to understand due to the black-box nature of neural networks. This lack of interpretability makes safety assurance for such autonomous systems challenging. The fields of eXplainable Artificial Intelligence (XAI) and eXplainable Reinforcement Learning (XRL) aim to interpret the decision making processes of neural networks and autonomous agents, respectively. In particular, work on causal explanations aims to provide "why" and "why not" explanations for why a model made a given decision. However, most of the work on explainability to date utilises a distilled version of the original model. While this distilled policy is interpretable, it necessarily degrades in performance significantly when compared to the original model, and is not guaranteed to be an accurate reflection of the decision making processes in the original model and as such cannot be used to guarantee its safety. Recent work on understanding the geometry of ReLU neural networks shows that a ReLU network corresponds to a piecewise linear function divided into regions defined by an n-dimensional convex polytope. Through this lens, a neural network can be understood as dividing the input space into distinct regions which apply a single linear function for each output neuron. We show that this geometric representation can be used to generate causal explanations for the network's behaviour similar to previous work, but which extracts rules directly from the geometry of Neural Networks with the ReLU activation function, and is therefore an accurate reflection of the network's behaviour.
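The geometric view described in the abstract can be sketched concretely. The following is a minimal illustration (not the paper's implementation) of the fact that, within one activation region, a ReLU network reduces to a single affine map: fixing which hidden units are active at a point `x` yields an exact local linear function `A @ y + c` valid throughout that convex polytope. All names and the tiny two-layer architecture here are hypothetical, chosen only to demonstrate the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2
W1 = rng.normal(size=(8, 4)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(3, 8)); b2 = rng.normal(size=3)

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

def local_linear_map(x):
    """Return (A, c) such that forward(y) == A @ y + c for every y in the
    same linear region (i.e. with the same ReLU activation pattern) as x."""
    pattern = (W1 @ x + b1 > 0).astype(float)  # which hidden units are active
    D = np.diag(pattern)                       # zeroes out inactive units
    A = W2 @ D @ W1                            # effective linear weights
    c = W2 @ D @ b1 + b2                       # effective bias
    return A, c

x = rng.normal(size=4)
A, c = local_linear_map(x)
assert np.allclose(forward(x), A @ x + c)
```

The region itself is the convex polytope defined by the sign constraints `W1 @ y + b1 >= 0` (for active units) and `<= 0` (for inactive ones); rules read off from `A`, `c`, and those half-space constraints therefore describe the original network exactly, with no distillation loss.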

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.