QLAM: A Quantum Long-Attention Memory Approach to Long-Sequence Token Modeling
Hoang-Quan Nguyen, Sankalp Pandey, Khoa Luu
TLDR
QLAM introduces a quantum long-attention memory, extending state-space models to efficiently capture long-range dependencies using quantum superposition.
Key contributions
- Proposes QLAM, a hybrid quantum-classical memory for long-sequence token modeling.
- Represents the hidden state as a quantum state, leveraging superposition for global updates.
- Maintains the linear-time complexity of state-space models while enriching the memory representation (see the sketch after this list).
- Outperforms recurrent and transformer models on sequential image classification benchmarks.
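To make the idea concrete, here is a minimal NumPy sketch (not the authors' code) of the kind of recurrence QLAM describes: the memory is a normalized complex amplitude vector over a few qubits, and each token applies an input-conditioned unitary, so the scan stays linear in sequence length. The qubit count, angle parameterization, and entangler layout below are illustrative assumptions, not the paper's exact circuits.

```python
# Hedged sketch of a quantum-style hidden-state recurrence.
# Assumptions: 4 qubits, scalar tokens, per-qubit RY angles from a linear map,
# and a fixed CNOT chain as the entangling layer.
import numpy as np

n_qubits = 4
dim = 2 ** n_qubits
rng = np.random.default_rng(0)

# Classical parameters: map a scalar token to one rotation angle per qubit.
W_angle = rng.normal(size=(n_qubits,))
b_angle = rng.normal(size=(n_qubits,))


def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)


def cnot(control, target, n):
    """CNOT on an n-qubit state vector, built as a dense permutation matrix."""
    U = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for basis in range(2 ** n):
        bits = [(basis >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        flipped = sum(bit << (n - 1 - q) for q, bit in enumerate(bits))
        U[flipped, basis] = 1.0
    return U


# Fixed entangling layer: a chain of CNOTs mixing amplitudes globally.
ENTANGLER = np.eye(dim, dtype=complex)
for q in range(n_qubits - 1):
    ENTANGLER = cnot(q, q + 1, n_qubits) @ ENTANGLER


def input_unitary(x):
    """Token-conditioned unitary: per-qubit RY rotations, then entanglement."""
    U = np.array([[1.0 + 0j]])
    for q in range(n_qubits):
        U = np.kron(U, ry(W_angle[q] * x + b_angle[q]))
    return ENTANGLER @ U


# Linear-time scan over a token sequence: one unitary update per token.
tokens = rng.normal(size=128)         # stand-in for a flattened image
state = np.zeros(dim, dtype=complex)
state[0] = 1.0                        # start in |0...0>
for x in tokens:
    state = input_unitary(x) @ state  # superposed memory update

print(np.linalg.norm(state))          # stays 1.0: evolution is unitary
```

The key point the sketch illustrates is the cost profile: each step is a fixed-size unitary applied to the same memory vector, so total work grows linearly with sequence length rather than quadratically as in attention.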
Why it matters
Transformers struggle with long sequences because of their quadratic attention complexity, while classical state-space models rely on additive or linear memory updates that limit complex global interactions across tokens. QLAM offers a quantum approach that preserves linear-time efficiency while substantially enriching the memory representation. This could pave the way for more powerful and scalable sequence models.
Original Abstract
Modeling long-range dependencies in sequential data remains a central challenge in machine learning. Transformers address this challenge through attention mechanisms, but their quadratic complexity with respect to sequence length limits scalability to long contexts. State-space models (SSMs) provide an efficient alternative with linear-time computation by evolving a latent state through recurrent updates, but their memory is typically formed via additive or linear transitions, which can limit their ability to capture complex global interactions across tokens. In this work, we introduce one of the first studies to leverage the superposition property of quantum systems to enhance state-based sequence modeling. In particular, we propose Quantum Long-Attention Memory (QLAM), a hybrid quantum-classical memory mechanism that can be viewed as a quantum extension of state-space models. Instead of maintaining a classical latent state updated through additive dynamics, QLAM represents the hidden state as a quantum state whose amplitudes encode a superposition of historical information. The state evolves through parameterized quantum circuits conditioned on the input, enabling a non-classical, globally informed update mechanism. In this way, QLAM preserves the recurrent and linear-time structure of SSMs while fundamentally enriching the memory representation through quantum superposition. Unlike attention mechanisms that explicitly compute pairwise interactions, QLAM implicitly captures global dependencies through the evolution of the quantum state, and retrieves task-relevant information via query-dependent measurements. We evaluate QLAM on sequential variants of standard image classification benchmarks, including sMNIST, sFashion-MNIST, and sCIFAR-10, where images are flattened into token sequences. Across all tasks, QLAM consistently improves over recurrent baselines and transformer-based models.
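The retrieval step described in the abstract, reading the memory through query-dependent measurements rather than pairwise attention, can be sketched in the same spirit. The per-qubit query rotation and the linear readout over measurement probabilities below are assumptions for illustration, not the paper's exact construction.

```python
# Hedged sketch of query-dependent measurement readout.
# Assumption: a query rotates the measurement basis, and the resulting
# probabilities act as a fixed-size feature vector for a linear classifier.
import numpy as np

n_qubits = 4
dim = 2 ** n_qubits
rng = np.random.default_rng(1)

# Stand-in for the evolved memory state after scanning a token sequence.
state = rng.normal(size=dim) + 1j * rng.normal(size=dim)
state /= np.linalg.norm(state)


def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)


def query_unitary(query_angles):
    """Query-conditioned change of measurement basis (per-qubit RY here)."""
    U = np.array([[1.0 + 0j]])
    for theta in query_angles:
        U = np.kron(U, ry(theta))
    return U


# Measurement probabilities under a task-dependent query rotation; they sum
# to 1 and have fixed size regardless of how long the input sequence was.
query_angles = rng.normal(size=n_qubits)
probs = np.abs(query_unitary(query_angles) @ state) ** 2

# Classical readout: probabilities feed a linear classification head.
n_classes = 10
W_out = rng.normal(size=(n_classes, dim))
logits = W_out @ probs
print(logits.argmax())
```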