LLM as Clinical Graph Structure Refiner: Enhancing Representation Learning in EEG Seizure Diagnosis
Lincan Li, Zheng Chen, Yushun Dong
TLDR
LLMs refine noisy EEG graph structures by removing redundant edges, boosting seizure diagnosis accuracy and interpretability.
Key contributions
- Proposes using LLMs to refine noisy EEG graph structures for improved seizure diagnosis.
- Introduces a two-stage framework: an initial graph built by a Transformer-based edge predictor, followed by LLM-based edge refinement.
- LLM acts as an edge set refiner, using textual and statistical features to validate connections.
- Achieves enhanced seizure detection accuracy and more interpretable EEG graph representations.
Why it matters
EEG seizure diagnosis is critical but challenging due to noisy data. This work leverages LLMs to prune spurious edges from learned graph representations, leading to more accurate and reliable diagnostic tools. It addresses a key limitation of existing correlation-based and learning-based graph construction methods: redundant or irrelevant edges.
Original Abstract
Electroencephalogram (EEG) signals are vital for automated seizure detection, but their inherent noise makes robust representation learning challenging. Existing graph construction methods, whether correlation-based or learning-based, often generate redundant or irrelevant edges due to the noisy nature of EEG data. This significantly impairs the quality of graph representation and limits downstream task performance. Motivated by the remarkable reasoning and contextual understanding capabilities of large language models (LLMs), we explore the idea of using LLMs as graph edge refiners. Specifically, we propose a two-stage framework: we first verify that LLM-based edge refinement can effectively identify and remove redundant connections, leading to significant improvements in seizure detection accuracy and more meaningful graph structures. Building on this insight, we further develop a robust solution where the initial graph is constructed using a Transformer-based edge predictor and multilayer perceptron, assigning probability scores to potential edges and applying a threshold to determine their existence. The LLM then acts as an edge set refiner, making informed decisions based on both textual and statistical features of node pairs to validate the remaining connections. Extensive experiments on the TUSZ dataset demonstrate that our LLM-refined graph learning framework not only enhances task performance but also yields cleaner and more interpretable graph representations.
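The two-stage pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy logits stand in for the Transformer-plus-MLP edge predictor's output, and `keep_fn` is a hypothetical placeholder for the LLM's accept/reject verdict on each candidate edge.

```python
import math

def sigmoid(x):
    # Map a raw edge logit to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def build_initial_edges(logits, threshold=0.5):
    """Stage 1: keep candidate edges whose predicted probability
    meets the threshold (stand-in for the Transformer+MLP predictor)."""
    edges = []
    for (i, j), logit in logits.items():
        p = sigmoid(logit)
        if p >= threshold:
            edges.append((i, j, p))
    return edges

def refine_edges(edges, keep_fn):
    """Stage 2 stand-in: the paper's LLM judges each surviving edge
    from textual and statistical features of the node pair; here a
    simple predicate plays that role."""
    return [(i, j, p) for (i, j, p) in edges if keep_fn(i, j, p)]

# Toy logits for EEG channel pairs (illustrative values only).
logits = {(0, 1): 2.0, (0, 2): -1.5, (1, 2): 0.3}
initial = build_initial_edges(logits, threshold=0.5)
refined = refine_edges(initial, keep_fn=lambda i, j, p: p > 0.7)
```

Here stage 1 keeps edges (0, 1) and (1, 2), and the stage-2 filter removes the weaker (1, 2), leaving a sparser, cleaner graph, which mirrors the redundancy-pruning effect the paper reports.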