ArXiv TLDR

A Comparative Study of Semantic Log Representations for Software Log-based Anomaly Detection

arXiv: 2604.08028

Yuqing Wang, Ying Song, Xiaozhou Li, Nana Reinikainen, Mika V. Mäntylä

cs.SE

TLDR

This paper benchmarks semantic log representations for log-based anomaly detection and proposes QTyBERT, a method that balances detection effectiveness with embedding-generation efficiency.

Key contributions

  • Benchmarked static (Word2Vec, GloVe, FastText) and BERT-based log representations.
  • Identified an effectiveness-efficiency trade-off in log embedding generation.
  • Proposed QTyBERT, a novel method balancing effectiveness and efficiency for log anomaly detection.
  • QTyBERT combines SysBE (a lightweight, system-specifically quantized BERT variant) with CroSysEh to produce efficient yet semantically expressive embeddings.
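To make the static-vs-contextual distinction in the benchmark concrete, here is a minimal sketch of how a static word embedding method represents a log event: each token is looked up in a pretrained vector table and the vectors are averaged. The vocabulary and vector values below are hypothetical stand-ins; a real pipeline would load trained Word2Vec, GloVe, or FastText vectors. A contextual method like BERT instead recomputes each token's vector from the whole event, which is what drives both its higher effectiveness and its higher cost.

```python
import numpy as np

# Hypothetical static word vectors (a real pipeline would load
# pretrained Word2Vec/GloVe/FastText vectors instead).
DIM = 4
vocab = {
    "error":   np.array([0.9, 0.1, 0.0, 0.2]),
    "disk":    np.array([0.2, 0.8, 0.1, 0.0]),
    "read":    np.array([0.1, 0.3, 0.7, 0.1]),
    "timeout": np.array([0.8, 0.0, 0.2, 0.6]),
}

def embed_log_event(event: str) -> np.ndarray:
    """Static embedding of a log event: average its word vectors.

    Out-of-vocabulary tokens are simply skipped here; FastText
    would instead back off to subword n-gram vectors.
    """
    vecs = [vocab[t] for t in event.lower().split() if t in vocab]
    if not vecs:
        return np.zeros(DIM)
    return np.mean(vecs, axis=0)

e = embed_log_event("disk read timeout")
```

Because the lookup-and-average step is a handful of array operations, generation time is negligible; the trade-off the paper identifies is that this representation cannot distinguish the same word used in different contexts.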

Why it matters

This paper addresses a practical obstacle to using BERT-based semantic log representations for anomaly detection: they are effective but slow. By making advanced semantic representations efficient enough for real-world CPU deployments, QTyBERT improves the feasibility of deep-learning-based log analysis.

Original Abstract

Recent deep learning (DL) methods for log anomaly detection increasingly rely on semantic log representation methods that convert the textual content of log events into vector embeddings as input to DL models. However, these DL methods are typically evaluated as end-to-end pipelines, while the impact of different semantic representation methods is not well understood. In this paper, we benchmark widely used semantic log representation methods, including static word embedding methods (Word2Vec, GloVe, and FastText) and the BERT-based contextual embedding method, across diverse DL models for log-event level anomaly detection on three publicly available log datasets: BGL, Thunderbird, and Spirit. We identify an effectiveness–efficiency trade-off under CPU deployment settings: the BERT-based method is more effective, but incurs substantially longer log embedding generation time, limiting its practicality; static word embedding methods are efficient but are generally less effective and may yield insufficient detection performance. Motivated by this finding, we propose QTyBERT, a novel semantic log representation method that better balances this trade-off. QTyBERT uses SysBE, a lightweight BERT variant with system-specific quantization, to efficiently encode log events into vector embeddings on CPUs, and leverages CroSysEh to enhance the semantic expressiveness of these log embeddings. CroSysEh is trained in an unsupervised manner on unlabeled logs from multiple systems to capture the underlying semantic structure of the BERT model's embedding space. We evaluate QTyBERT against existing semantic log representation methods. Our results show that, for the DL models, using QTyBERT-generated log embeddings achieves detection effectiveness comparable to or better than BERT-generated log embeddings, while bringing log embedding generation time closer to that of static word embedding methods.
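The abstract attributes SysBE's CPU efficiency to quantization. The paper does not spell out the scheme, but the core idea behind weight quantization can be sketched with a toy symmetric int8 quantizer: store a weight matrix as 8-bit integers plus one float scale, cutting memory (and, on CPUs, arithmetic cost) by 4x versus float32 at the price of a small rounding error. All names and values below are illustrative, not the paper's implementation.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 64)).astype(np.float32)  # a toy weight matrix

q, s = quantize_int8(W)
W_hat = dequantize(q, s)
err = np.abs(W - W_hat).max()  # worst-case rounding error, at most s / 2
```

Frameworks such as PyTorch offer dynamic quantization along these lines for Transformer inference on CPUs; "system-specific" quantization in the paper presumably tunes such a scheme per logging system.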
