ArXiv TLDR

AnomalyGen: Enhancing Log-Based Anomaly Detection with Code-Guided Data Augmentation

arXiv:2604.11107

Xinyu Li, Yintong Huo, Chenxi Mao, Shiwen Shan, Yuxin Su + 2 more

cs.SE

TLDR

AnomalyGen enhances log-based anomaly detection by augmenting training data with code-guided synthesis, significantly reducing false alarms.

Key contributions

  • Addresses the training-data sparsity in log anomaly detection that causes frequent false alarms.
  • Synthesizes labeled log sequences from source code for data augmentation.
  • Uses Log-Oriented Control Flow Graphs (LCFGs) to enumerate valid execution paths.
  • Applies LLM Chain-of-Thought for logical consistency and realistic parameter generation.

Why it matters

Log-based anomaly detection is often hindered by insufficient training data, leading to high false alarm rates. AnomalyGen offers a novel solution by programmatically generating realistic, labeled log data. This approach substantially boosts the accuracy and reliability of various anomaly detection models.

Original Abstract

Log-based anomaly detection is fundamentally constrained by training data sparsity. Our empirical study reveals that public benchmark datasets cover less than 10% of source code log templates. Consequently, models frequently misclassify unseen but valid execution paths as anomalies, leading to false alarms. To address this, we propose AnomalyGen, a novel framework that augments training data by synthesizing labeled log sequences from source code. AnomalyGen combines log-oriented static analysis with Large Language Model (LLM) reasoning in three stages: (1) building Log-Oriented Control Flow Graphs (LCFGs) to enumerate structurally valid execution paths; (2) applying LLM Chain-of-Thought (CoT) reasoning to verify logical consistency and generate realistic runtime parameters (e.g., block IDs, IP addresses); and (3) labeling generated sequences with domain heuristics. Evaluations on HDFS and Zookeeper across 12 diverse anomaly detection models show AnomalyGen consistently improves performance. Deep learning models achieved average F1-score gains of 2.18% (HDFS) and 1.69% (Zookeeper), with an unsupervised Transformer on HDFS jumping from 0.818 to 0.970. Ablation results show that both static analysis and LLM-based verification are necessary: removing them reduces F1 by up to 8.7 and 10.7 percentage points, respectively. Our framework and datasets are publicly available to facilitate future research.
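The path-enumeration and labeling stages described in the abstract can be illustrated with a minimal sketch. The graph below, its log-template names, and the error-keyword heuristic are all illustrative assumptions, not the paper's actual data structures: a hypothetical LCFG is modeled as an adjacency map over log templates, a depth-first search enumerates acyclic execution paths (stage 1), and a toy domain heuristic labels each path (stage 3).

```python
from typing import Dict, List

def enumerate_paths(lcfg: Dict[str, List[str]], start: str, end: str) -> List[List[str]]:
    """Depth-first enumeration of all acyclic paths from start to end."""
    paths: List[List[str]] = []

    def dfs(node: str, path: List[str]) -> None:
        if node == end:
            paths.append(path)
            return
        for nxt in lcfg.get(node, []):
            if nxt not in path:          # skip cycles to keep paths acyclic
                dfs(nxt, path + [nxt])

    dfs(start, [start])
    return paths

def label_path(path: List[str]) -> str:
    """Toy heuristic: any template mentioning 'error' marks the path anomalous."""
    return "anomaly" if any("error" in t for t in path) else "normal"

# Each node stands for a log template; edges follow the code's control flow.
lcfg = {
    "recv_block": ["verify_checksum"],
    "verify_checksum": ["write_block", "checksum_error"],
    "write_block": ["ack"],
    "checksum_error": ["ack"],
}

for p in enumerate_paths(lcfg, "recv_block", "ack"):
    print(" -> ".join(p), "=>", label_path(p))
```

This toy graph yields one normal path (through `write_block`) and one anomalous path (through `checksum_error`); the paper's full pipeline additionally uses LLM Chain-of-Thought reasoning to filter logically inconsistent paths and fill in realistic runtime parameters.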
