Hijacking Large Audio-Language Models via Context-Agnostic and Imperceptible Auditory Prompt Injection
Meng Chen, Kun Wang, Li Lu, Jiaheng Zhang, Tianwei Zhang
TLDR
AudioHijack injects imperceptible audio prompts to hijack large audio-language models, forcing unauthorized actions and exposing critical vulnerabilities.
Key contributions
- Reveals "auditory prompt injection," a new threat allowing imperceptible audio to hijack large audio-language models.
- Introduces `AudioHijack`, a framework that generates context-agnostic, imperceptible adversarial audio to control LALMs.
- Achieves 79-96% hijacking success on 13 SOTA LALMs across diverse contexts, maintaining high acoustic fidelity.
- Demonstrates real-world hijacking of commercial voice agents (e.g., Mistral AI, Microsoft Azure) for unauthorized actions.
Why it matters
This work exposes a critical, overlooked vulnerability in large audio-language models (LALMs), showing how imperceptible audio can force unauthorized actions. It highlights the urgent need for dedicated defenses against auditory prompt injection attacks, which directly affect the security of intelligent voice interactions.
Original Abstract
Modern large audio-language models (LALMs) power intelligent voice interactions by tightly integrating audio and text. This integration, however, expands the attack surface beyond text and introduces vulnerabilities in the continuous, high-dimensional audio channel. While prior work studied audio jailbreaks, the security risks of malicious audio injection and downstream behavior manipulation remain underexamined. In this work, we reveal a previously overlooked threat, auditory prompt injection, under realistic constraints of audio data-only access and strong perceptual stealth. To systematically analyze this threat, we propose AudioHijack, a general framework that generates context-agnostic and imperceptible adversarial audio to hijack LALMs. AudioHijack employs sampling-based gradient estimation for end-to-end optimization across diverse models, bypassing non-differentiable audio tokenization. Through attention supervision and multi-context training, it steers model attention toward adversarial audio and generalizes to unseen user contexts. We also design a convolutional blending method that modulates perturbations into natural reverberation, making them highly imperceptible to users. Extensive experiments on 13 state-of-the-art LALMs show consistent hijacking across 6 misbehavior categories, achieving average success rates of 79%-96% on unseen user contexts with high acoustic fidelity. Real-world studies demonstrate that commercial voice agents from Mistral AI and Microsoft Azure can be induced to execute unauthorized actions on behalf of users. These findings expose critical vulnerabilities in LALMs and highlight the urgent need for dedicated defenses.
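The paper's code is not reproduced here, but the core idea of sampling-based gradient estimation can be illustrated with a short sketch. The snippet below is an assumed, minimal zeroth-order (finite-difference) estimator: it treats the whole model, including non-differentiable audio tokenization, as a black-box loss `loss_fn` (for example, the negative log-likelihood of the attacker's target response) and optimizes an additive perturbation purely by querying that loss. Multi-context training would correspond to averaging this loss over several user prompts. All function names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_gradient(loss_fn, wav, delta, n_samples=32, sigma=1e-3):
    """Two-sided sampling-based estimate of the gradient of loss_fn w.r.t. delta.

    loss_fn maps a waveform to a scalar and may contain non-differentiable
    steps (e.g., audio tokenization), so it is only queried, never
    backpropagated through.
    """
    grad = np.zeros_like(delta)
    for _ in range(n_samples):
        u = np.random.randn(*delta.shape)            # random search direction
        l_pos = loss_fn(wav + delta + sigma * u)     # forward query (+)
        l_neg = loss_fn(wav + delta - sigma * u)     # forward query (-)
        grad += (l_pos - l_neg) / (2.0 * sigma) * u  # finite-difference estimate
    return grad / n_samples

def optimize_perturbation(loss_fn, wav, steps=200, lr=1e-3, eps=0.01):
    """Descend the estimated gradient on an additive perturbation, clipped to eps."""
    delta = np.zeros_like(wav)
    for _ in range(steps):
        g = estimate_gradient(loss_fn, wav, delta)
        delta -= lr * g                   # gradient-estimate descent step
        delta = np.clip(delta, -eps, eps) # keep the perturbation small
    return delta
```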
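The "convolutional blending" step, as described, modulates the perturbation into natural-sounding reverberation. One plausible (assumed) realization is to convolve the optimized perturbation with a room impulse response before mixing it into the carrier speech, so its energy is spread over time like an echo tail; the sketch below illustrates this idea, with the RIR and mixing gain as placeholders rather than the authors' parameters.

```python
import numpy as np
from scipy.signal import fftconvolve

def blend_as_reverb(carrier, perturbation, rir, gain=0.05):
    """Blend an adversarial perturbation into the carrier as reverberation-like energy.

    carrier:      clean speech waveform (1-D float array)
    perturbation: optimized adversarial signal, same length as carrier
    rir:          a room impulse response; convolution spreads the perturbation
                  over time so it resembles natural echoes
    gain:         low mixing level chosen for imperceptibility (placeholder value)
    """
    reverb_like = fftconvolve(perturbation, rir, mode="full")[: len(carrier)]
    reverb_like /= np.max(np.abs(reverb_like)) + 1e-12  # normalize peak amplitude
    return carrier + gain * reverb_like
```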