ArXiv TLDR

Leave My Images Alone: Preventing Multi-Modal Large Language Models from Analyzing Images via Visual Prompt Injection

arXiv:2604.09024

Zedian Shao, Hongbin Liu, Yuepeng Hu, Neil Zhenqiang Gong

cs.CV cs.AI cs.CR cs.LG

TLDR

ImageProtector uses visual prompt injection to prevent MLLMs from analyzing sensitive images, causing the models to refuse such requests.

Key contributions

  • Introduces ImageProtector, a user-side method for image privacy protection.
  • Embeds nearly imperceptible perturbations to trigger MLLM refusal responses.
  • Demonstrates effectiveness across six MLLMs and four datasets.
  • Shows that three countermeasures (Gaussian noise, DiffPure, adversarial training) partially mitigate ImageProtector but degrade model accuracy and/or efficiency.

Why it matters

MLLMs pose privacy risks by extracting sensitive data from images at scale. This paper offers a proactive user-side defense, crucial for protecting personal information in the age of widespread MLLM use, and highlights both the potential and the limits of perturbation-based privacy protection.

Original Abstract

Multi-modal large language models (MLLMs) have emerged as powerful tools for analyzing Internet-scale image data, offering significant benefits but also raising critical safety and societal concerns. In particular, open-weight MLLMs may be misused to extract sensitive information from personal images at scale, such as identities, locations, or other private details. In this work, we propose ImageProtector, a user-side method that proactively protects images before sharing by embedding a carefully crafted, nearly imperceptible perturbation that acts as a visual prompt injection attack on MLLMs. As a result, when an adversary analyzes a protected image with an MLLM, the MLLM is consistently induced to generate a refusal response such as "I'm sorry, I can't help with that request." We empirically demonstrate the effectiveness of ImageProtector across six MLLMs and four datasets. Additionally, we evaluate three potential countermeasures, Gaussian noise, DiffPure, and adversarial training, and show that while they partially mitigate the impact of ImageProtector, they simultaneously degrade model accuracy and/or efficiency. Our study focuses on the practically important setting of open-weight MLLMs and large-scale automated image analysis, and highlights both the promise and the limitations of perturbation-based privacy protection.
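The abstract describes embedding a small, bounded perturbation that maximizes the chance an MLLM emits a refusal. The paper's exact objective and optimizer are not given here, but the general idea resembles PGD-style adversarial optimization. The sketch below is a minimal illustration under stated assumptions: `refusal_score` is a toy linear stand-in for an MLLM's refusal log-probability (a real attack would backpropagate through an open-weight MLLM), and the epsilon/step-size values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64,))  # toy surrogate: a linear "refusal" head (hypothetical)

def refusal_score(x):
    """Toy stand-in for the model's log-probability of refusing."""
    return float(W @ x)

def refusal_grad(x):
    """Gradient of the linear surrogate (constant for a linear model)."""
    return W

def protect(image, eps=8 / 255, alpha=1 / 255, steps=50):
    """PGD-style ascent: maximize the refusal score subject to an
    L-infinity bound eps, keeping pixel values in [0, 1]."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        g = refusal_grad(image + delta)
        # signed gradient step, then project back into the eps-ball
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
        # keep the perturbed image inside the valid pixel range
        delta = np.clip(image + delta, 0.0, 1.0) - image
    return image + delta

image = rng.uniform(0.0, 1.0, size=64)  # flattened toy "image"
protected = protect(image)
```

After optimization, `protected` differs from `image` by at most eps per pixel (the "nearly imperceptible" budget) while scoring higher on the surrogate refusal objective; against a real MLLM, the loss would instead target the likelihood of a refusal string such as "I'm sorry, I can't help with that request."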
