ArXiv TLDR

On the Decompositionality of Neural Networks

2604.07868

Junyong Lee, Baek-Ryun Seong, Sang-Ki Ko, Andrew Ferraiuolo, Minwoo Kang + 3 more

cs.LO cs.SE

TLDR

Introduces neural decompositionality as a formal, boundary-aware notion, develops the SAVED framework to operationalize it, and shows clear architectural differences: language Transformers largely preserve boundary semantics under decomposition, while vision models often do not.

Key contributions

  • Defines neural decompositionality as a semantic-preserving abstraction over architectures.
  • Characterizes decomposition by preserving semantic behavior along the model's decision boundary.
  • Develops SAVED, a framework using counterexample mining and pruning to operationalize decomposition.
  • Shows that language Transformers largely preserve boundary semantics under decomposition, while vision models frequently violate the decompositionality criterion.

Why it matters

Neural networks are typically treated as black boxes, which limits maintainability and verification. This paper provides a principled formal definition of when a network can be meaningfully decomposed, together with a practical framework for doing so. That enables modular reasoning, improving maintainability, component-wise optimization, and systematic testing of complex AI models.

Original Abstract

Recent advances in deep neural networks have achieved state-of-the-art performance across vision and natural language processing tasks. In practice, however, most models are treated as monolithic black-box functions, limiting maintainability, component-wise optimization, and systematic testing and verification. Despite extensive work on pruning and empirical decomposition, the field still lacks a principled semantic notion of when a neural network can be meaningfully decomposed. We introduce neural decompositionality, a formal notion defined as a semantic-preserving abstraction over neural architectures. Our key insight is that decompositionality should be characterized by the preservation of semantic behavior along the model's decision boundary, which governs classification outcomes. This yields a semantic contract between the original model and its components, enabling a rigorous formulation of decomposition. Building on this foundation, we develop a boundary-aware framework, SAVED (Semantic-Aware Verification-Driven Decomposition), which operationalizes the proposed definition. SAVED combines counterexample mining over low logic-margin inputs, probabilistic coverage, and structure-aware pruning to construct decompositions that preserve decision-boundary semantics. We evaluate our approach on CNNs, language Transformers, and Vision Transformers. Results show clear architectural differences: language Transformers largely preserve boundary semantics under decomposition, whereas vision models frequently violate the decompositionality criterion, indicating intrinsic limits. Overall, our work establishes decompositionality as a formally definable and empirically testable property, providing a foundation for modular reasoning about neural networks.
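The counterexample-mining step described in the abstract can be sketched in miniature: select inputs whose logit margin (gap between the top two logits) is small, i.e. that lie near the decision boundary, and flag those on which the decomposed model changes the predicted class. This is a hedged illustration only, not the paper's implementation; the function names, the threshold `margin_tau`, and the use of raw logit margins as a boundary-proximity proxy are assumptions.

```python
import numpy as np

def logit_margin(logits):
    """Gap between the top two logits; small values indicate
    inputs close to the model's decision boundary."""
    s = np.sort(logits, axis=-1)
    return s[..., -1] - s[..., -2]

def mine_counterexamples(f_logits, g_logits, inputs, margin_tau=0.5):
    """Boundary-aware mining sketch: keep only low-margin inputs
    under the original model f, then return those on which the
    decomposed model g disagrees with f's predicted class."""
    counterexamples = []
    for x in inputs:
        lf = f_logits(x)
        if logit_margin(lf) >= margin_tau:
            continue  # far from the boundary; skip
        if np.argmax(lf) != np.argmax(g_logits(x)):
            counterexamples.append(x)  # boundary semantics violated
    return counterexamples
```

For example, with a toy linear classifier `f` and a perturbed copy `g` standing in for a decomposition, an input near the boundary where `g` flips the prediction is returned, while a high-margin input is skipped.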
