ExAI5G: A Logic-Based Explainable AI Framework for Intrusion Detection in 5G Networks
Saeid Sheikhi, Panos Kostakos, Lauri Loven
TLDR
ExAI5G is an explainable AI framework for 5G intrusion detection that pairs a Transformer-based detector with logic-based XAI, delivering high detection accuracy alongside transparent, rule-based reasoning.
Key contributions
- Proposes ExAI5G, a framework integrating Transformer-based IDS with logic-based XAI for 5G intrusion detection.
- Uses Integrated Gradients to attribute feature importance and extracts a surrogate decision tree that yields logical rules (see the first sketch after this list).
- Achieves 99.9% accuracy and 0.854 macro F1-score on a 5G IoT intrusion dataset.
- Extracts 16 logical rules with 99.7% fidelity, making the IDS's reasoning transparent and actionable (see the second sketch after this list).
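To make the attribution step concrete, here is a minimal sketch of Integrated Gradients over a stand-in classifier, assuming a Captum-style workflow. `TinyIDS`, the feature dimensions, and the zero baseline are illustrative assumptions; the paper's actual Transformer IDS and 5G IoT dataset are not reproduced here.

```python
# Hypothetical sketch: Integrated Gradients attribution with Captum.
# TinyIDS is a stand-in for the paper's Transformer IDS.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class TinyIDS(nn.Module):
    def __init__(self, n_features=20, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )
    def forward(self, x):
        return self.net(x)

model = TinyIDS().eval()
x = torch.randn(8, 20)                 # a batch of (invented) flow features
target = model(x).argmax(dim=1)        # explain each example's predicted class

ig = IntegratedGradients(model)
attr, delta = ig.attribute(
    x,
    baselines=torch.zeros_like(x),     # all-zero reference input (assumed)
    target=target,
    return_convergence_delta=True,
)
feature_importance = attr.abs().mean(dim=0)   # aggregate per-feature scores
print(feature_importance)
```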
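The surrogate step can be sketched the same way: fit a shallow decision tree to the black-box model's predictions (not the ground-truth labels) and read its branches out as rules, with fidelity measured as tree-vs-model agreement, matching the metric quoted in the abstract. Everything below is a synthetic stand-in; the `MLPClassifier` black box, tree depth, and feature names are assumptions, not the paper's setup.

```python
# Hypothetical sketch: surrogate decision tree and fidelity check.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((5000, 20))                         # stand-in traffic features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)          # synthetic labels

# Stand-in black box for the IDS; the paper uses a Transformer model.
black_box = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                          random_state=0).fit(X, y)
y_model = black_box.predict(X)                     # explain the model, not the data

surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X, y_model)                          # mimic the black box

fidelity = (surrogate.predict(X) == y_model).mean()
print(f"fidelity: {fidelity:.3f}")                 # agreement with the black box
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(20)]))
```

Each root-to-leaf path in the printed tree is one logical rule of the kind the paper extracts.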
Why it matters
This paper addresses the need for transparent intrusion detection in complex 5G networks. By combining high-performance deep learning with explainable AI, ExAI5G builds operator trust without sacrificing accuracy, and its evaluation of LLM-generated explanations checks that security insights are both faithful and actionable.
Original Abstract
Intrusion detection systems (IDSs) for 5G networks must handle complex, high-volume traffic. Although opaque "black-box" models can achieve high accuracy, their lack of transparency hinders trust and effective operational response. We propose ExAI5G, a framework that prioritizes interpretability by integrating a Transformer-based deep learning IDS with logic-based explainable AI (XAI) techniques. The framework uses Integrated Gradients to attribute feature importance and extracts a surrogate decision tree to derive logical rules. We introduce a novel evaluation methodology for LLM-generated explanations, using a powerful evaluator LLM to assess actionability and measuring their semantic similarity and faithfulness. On a 5G IoT intrusion dataset, our system achieves 99.9% accuracy and a 0.854 macro F1-score, demonstrating strong performance. More importantly, we extract 16 logical rules with 99.7% fidelity, making the model's reasoning transparent. The evaluation demonstrates that modern LLMs can generate explanations that are both faithful and actionable, indicating that it is possible to build a trustworthy and effective IDS without compromising performance for the sake of marginal gains from an opaque model.
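One of the abstract's evaluation signals, semantic similarity between an LLM-generated explanation and a reference, can be sketched as cosine similarity over sentence embeddings. The embedding model and both example strings below are assumptions for illustration; the paper does not specify this exact setup.

```python
# Hypothetical sketch: semantic similarity between an LLM explanation
# and a reference explanation, via sentence-embedding cosine similarity.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
reference = "Alert: TCP SYN flood - high packet rate, low payload entropy."
generated = "The flow shows many short SYN packets, consistent with a SYN flood."

emb = encoder.encode([reference, generated])
score = cosine_similarity(emb[:1], emb[1:])[0, 0]
print(f"semantic similarity: {score:.3f}")
```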