ArXiv TLDR

Robustness Analysis of Machine Learning Models for IoT Intrusion Detection Under Data Poisoning Attacks

2604.14444

Fortunatus Aabangbio Wulnye, Justice Owusu Agyemang, Kwame Opuni-Boachie Obour Agyekum, Kwame Agyeman-Prempeh Agyekum, Kingsford Sarkodie Obeng Kwakye + 1 more

cs.CR, cs.AI

TLDR

This paper analyzes the robustness of ML models for IoT intrusion detection under data poisoning attacks, finding degradation of up to 40% in Logistic Regression and Deep Neural Networks while ensemble models remain comparatively stable.

Key contributions

  • Evaluated four widely used ML classifiers (Random Forest, Gradient Boosting Machine, Logistic Regression, Deep Neural Network) against multiple data poisoning strategies on three real-world IoT datasets.
  • Found that Logistic Regression and Deep Neural Networks degrade by up to 40% in detection fidelity under label manipulation and outlier-based attacks.
  • Demonstrated that ensemble models (Random Forest, Gradient Boosting) exhibit comparatively stable performance.
  • Emphasized the need for adversarially robust training and continuous anomaly monitoring in operational IoT NIDS.
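The label-manipulation attack evaluated above can be illustrated with a minimal sketch. This is not the paper's exact protocol: synthetic data stands in for the three real-world IoT datasets, and the 20% flip rate, model hyperparameters, and accuracy metric are illustrative assumptions.

```python
# Minimal label-flipping poisoning sketch (assumptions: synthetic binary
# data in place of the IoT datasets, a 20% flip rate, default models).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def flip_labels(y, fraction, rng):
    """Poison training labels by flipping a random fraction (binary case)."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

y_poisoned = flip_labels(y_tr, fraction=0.2, rng=rng)

# Compare clean vs. poisoned training for an ensemble and a linear model.
for name, model in [("RandomForest", RandomForestClassifier(random_state=0)),
                    ("LogisticRegression", LogisticRegression(max_iter=1000))]:
    clean_acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    pois_acc = accuracy_score(y_te, model.fit(X_tr, y_poisoned).predict(X_te))
    print(f"{name}: clean={clean_acc:.3f} poisoned={pois_acc:.3f}")
```

On this toy setup the gap between the clean and poisoned scores gives a rough sense of each model's sensitivity to corrupted training labels, mirroring the paper's comparison across classifiers.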

Why it matters

This paper is crucial as it empirically demonstrates the severe impact of data poisoning on IoT intrusion detection systems, particularly for Logistic Regression and Deep Neural Networks. It highlights the urgent need for robust training and continuous monitoring, informing future research on adaptive, attack-aware models for reliable IoT security.

Original Abstract

Ensuring the reliability of machine learning-based intrusion detection systems remains a critical challenge in Internet of Things (IoT) environments, particularly as data poisoning attacks increasingly threaten the integrity of model training pipelines. This study evaluates the susceptibility of four widely used classifiers, Random Forest, Gradient Boosting Machine, Logistic Regression, and Deep Neural Network models, against multiple poisoning strategies using three real-world IoT datasets. Results show that while ensemble-based models exhibit comparatively stable performance, Logistic Regression and Deep Neural Networks suffer degradation of up to 40% under label manipulation and outlier-based attacks. Such disruptions significantly distort decision boundaries, reduce detection fidelity, and undermine deployment readiness. The findings highlight the need for adversarially robust training, continuous anomaly monitoring, and feature-level validation within operational Network Intrusion Detection Systems. The study also emphasizes the importance of integrating resilience testing into regulatory and compliance frameworks for AI-driven IoT security. Overall, this work provides an empirical foundation for developing more resilient intrusion detection pipelines and informs future research on adaptive, attack-aware models capable of maintaining reliability under adversarial IoT conditions.
