Are GUI Agents Focused Enough? Automated Distraction via Semantic-level UI Element Injection
Wenkui Yang, Chao Jin, Haisu Zhu, Weilin Luo, Derek Yuen, and 5 others
TLDR
This paper introduces Semantic-level UI Element Injection, a red-teaming setting that overlays harmless, safety-aligned UI elements onto screenshots to misdirect GUI agents, revealing model-agnostic vulnerabilities.
Key contributions
- Introduces "Semantic-level UI Element Injection" to misdirect GUI agents via overlaid, harmless UI elements.
- Employs a modular Editor-Overlapper-Victim pipeline with an iterative search procedure, boosting attack success rate by up to 4.4x over random injection.
- Demonstrates model-agnostic vulnerabilities, as optimized attacks transfer effectively across different models.
- Shows injected elements act as persistent attractors rather than visual clutter: after an initial successful attack, the victim still clicks them in over 15% of later independent trials.
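The iterative search described above can be sketched as a simple loop: each round, an Editor proposes candidate edits, an Overlapper composes them onto the best cumulative overlay, and the Victim's response scores them; failures feed back into future proposals. This is a minimal illustrative sketch, not the paper's implementation; all function names, the scoring model, and the candidate format are hypothetical stand-ins.

```python
import random

random.seed(0)  # for reproducibility of this toy example

# Hypothetical stand-ins for the paper's Editor / Overlapper / Victim modules.
def editor_propose(failure_history, n_candidates=4):
    """Editor: propose candidate UI-element edits, conditioning on past failures."""
    return [{"label": f"edit-{len(failure_history)}-{i}"} for i in range(n_candidates)]

def overlap(overlay, edit):
    """Overlapper: add a candidate element to the cumulative overlay."""
    return overlay + [edit["label"]]

def victim_attack_rate(screenshot, overlay):
    """Victim: simulated probability the agent clicks the injected element.
    A real pipeline would query the victim GUI agent here."""
    return min(1.0, 0.05 * len(overlay) + random.random() * 0.1)

def iterative_search(screenshot, rounds=5):
    best_overlay, best_rate, failures = [], 0.0, []
    for _ in range(rounds):
        candidates = editor_propose(failures)
        # Evaluate each candidate on top of the best cumulative overlay so far.
        scored = [(victim_attack_rate(screenshot, overlap(best_overlay, c)), c)
                  for c in candidates]
        rate, chosen = max(scored, key=lambda x: x[0])
        if rate > best_rate:
            # Keep the edit only if it improves cumulative attack success.
            best_overlay = overlap(best_overlay, chosen)
            best_rate = rate
        else:
            # Record the failure so the Editor can adapt future proposals.
            failures.append(chosen)
    return best_overlay, best_rate
```

For example, `iterative_search("home_screen.png")` returns the best cumulative overlay found and its estimated attack success rate; the greedy keep-the-best-overlay step mirrors the "keeps the best cumulative overlay" behavior described in the abstract.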
Why it matters
This work addresses limitations in current GUI agent red-teaming by proposing a practical, black-box threat model. It reveals significant, persistent, and model-agnostic vulnerabilities in GUI agents, highlighting a critical area for improving their robustness and safety alignment.
Original Abstract
Existing red-teaming studies on GUI agents have important limitations. Adversarial perturbations typically require white-box access, which is unavailable for commercial systems, while prompt injection is increasingly mitigated by stronger safety alignment. To study robustness under a more practical threat model, we propose Semantic-level UI Element Injection, a red-teaming setting that overlays safety-aligned and harmless UI elements onto screenshots to misdirect the agent's visual grounding. Our method uses a modular Editor-Overlapper-Victim pipeline and an iterative search procedure that samples multiple candidate edits, keeps the best cumulative overlay, and adapts future prompt strategies based on previous failures. Across five victim models, our optimized attacks improve attack success rate by up to 4.4x over random injection on the strongest victims. Moreover, elements optimized on one source model transfer effectively to other target models, indicating model-agnostic vulnerabilities. After the first successful attack, the victim still clicks the attacker-controlled element in more than 15% of later independent trials, versus below 1% for random injection, showing that the injected element acts as a persistent attractor rather than simple visual clutter.