Hypergraph Neural Networks Accelerate MUS Enumeration
TLDR
This paper introduces a domain-agnostic Hypergraph Neural Network (HGNN) method, trained with reinforcement learning, to accelerate Minimal Unsatisfiable Subset (MUS) enumeration.
Key contributions
- Proposes a domain-agnostic method for MUS enumeration using Hypergraph Neural Networks (HGNNs).
- Incrementally builds a hypergraph with constraints as vertices and previously found MUSes as hyperedges.
- Trains an HGNN-based agent via reinforcement learning to minimize satisfiability checks for new MUSes.
- Achieves faster MUS enumeration, finding more MUSes within the same satisfiability check budget.
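The central data structure in the contributions above is an incrementally grown hypergraph whose vertices are constraints and whose hyperedges are the MUSes found so far. The paper does not publish code, so the following is a minimal illustrative sketch (class and method names are hypothetical) of that representation, using the vertex-by-hyperedge incidence matrix that HGNN layers typically consume:

```python
import numpy as np


class MUSHypergraph:
    """Incremental hypergraph: constraints are vertices,
    previously found MUSes are hyperedges. (Hypothetical sketch,
    not the paper's implementation.)"""

    def __init__(self, num_constraints):
        self.num_constraints = num_constraints
        self.hyperedges = []  # each hyperedge is a frozenset of constraint indices

    def add_mus(self, mus):
        """Record a newly enumerated MUS (a set of constraint indices)
        as one hyperedge."""
        self.hyperedges.append(frozenset(mus))

    def incidence_matrix(self):
        """Return H of shape (num_constraints, num_MUSes), where
        H[v, e] = 1 iff constraint v belongs to the e-th found MUS."""
        H = np.zeros((self.num_constraints, len(self.hyperedges)), dtype=np.float32)
        for e, mus in enumerate(self.hyperedges):
            for v in mus:
                H[v, e] = 1.0
        return H


# Example: 5 constraints, two MUSes found so far
hg = MUSHypergraph(5)
hg.add_mus({0, 2})
hg.add_mus({1, 2, 4})
H = hg.incidence_matrix()  # shape (5, 2)
```

An HGNN-based agent would take such an incidence structure as input after each newly found MUS and score constraints, aiming to reach the next MUS with as few satisfiability checks as possible.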
Why it matters
MUS enumeration is a fundamental task, but its cost grows exponentially with the search space, especially when individual satisfiability checks are expensive. This work offers a domain-agnostic ML approach that reduces the number of satisfiability checks needed, broadening the applicability of ML to constraint satisfaction problems beyond Boolean SAT.
Original Abstract
Enumerating Minimal Unsatisfiable Subsets (MUSes) is a fundamental task in constraint satisfaction problems (CSPs). Its major challenge is the exponential growth of the search space, which becomes particularly severe when satisfiability checks are expensive. Recent machine learning approaches reduce this cost for Boolean satisfiability problems but rely on explicit variable-constraint relationships, limiting their application domains. This paper proposes a domain-agnostic method to accelerate MUS enumeration using Hypergraph Neural Networks (HGNNs). The proposed method incrementally builds a hypergraph with constraints as vertices and MUSes enumerated until the current step as hyperedges, and employs an HGNN-based agent trained via reinforcement learning to minimize the number of satisfiability checks required to obtain an MUS. Experimental results demonstrate the effectiveness of our approach in accelerating MUS enumeration, showing that our method can enumerate more MUSes within the same satisfiability check budget compared to conventional methods.