ArXiv TLDR

ScriptHOI: Learning Scripted State Transitions for Open-Vocabulary Human-Object Interaction Detection

arXiv:2605.05057

Minh Anh Nguyen, Quang Huy Tran, Bao Ngoc Le, SuiYang Guang, Tuan Kiet Pham, et al.

cs.CV

TLDR

ScriptHOI improves open-vocabulary human-object interaction detection by modeling interactions as scripted state transitions, reducing affordance-based false positives.

Key contributions

  • Represents each HOI phrase as a soft scripted state transition, decomposed into six visual state slots: body-role, contact, geometry, affordance, motion, and object-state.
  • Uses a visual state tokenizer and a slot-wise matcher to estimate script coverage and script conflict, which calibrate the HOI logits (see the sketch after this list).
  • Introduces interval partial-label learning for unannotated interactions, avoiding closed-world negatives.
  • Employs a counterfactual script contrast loss to prevent object-only prediction shortcuts.
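
As a rough illustration of the coverage/conflict calibration in the second bullet, here is a minimal PyTorch sketch. The six slot names come from the abstract; the function name, tensor shapes, score semantics, and weights are assumptions for illustration, not ScriptHOI's published formulation.

```python
import torch

# The six script slots named in the abstract; everything else in this
# sketch is a hypothetical illustration, not the paper's exact method.
SLOTS = ["body_role", "contact", "geometry", "affordance", "motion", "object_state"]

def calibrate_hoi_logits(hoi_logits: torch.Tensor, slot_match: torch.Tensor) -> torch.Tensor:
    """Calibrate phrase logits with slot-wise visual evidence.

    hoi_logits: (pairs, phrases) raw scores from the base detector.
    slot_match: (pairs, phrases, len(SLOTS)) in [-1, 1], where positive
        values mean the parsed visual state token supports that slot of
        the phrase's script and negative values mean it contradicts it.
    """
    support = slot_match.clamp(min=0.0)
    contradiction = (-slot_match).clamp(min=0.0)
    coverage = support.mean(dim=-1)               # how much of the script is verified
    conflict = contradiction.max(dim=-1).values   # strongest contradicted slot
    # Reward verified scripts, penalize contradicted ones (weights are made up).
    return hoi_logits + 2.0 * coverage - 4.0 * conflict
```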

Why it matters

Current HOI detectors often make incorrect predictions from object affordance alone, e.g., predicting "cut cake" whenever a knife and a cake co-occur. ScriptHOI addresses this by verifying visual evidence across multiple interaction aspects: body role, contact, geometry, motion, and object state. This leads to more accurate and robust detection of rare and unseen human-object interactions.

Original Abstract

Open-vocabulary human-object interaction (HOI) detection requires recognizing interaction phrases that may not appear as annotated categories during training. Recent vision-language HOI detectors improve semantic transfer by matching human-object features with text embeddings, but their predictions are often dominated by object affordance and phrase-level co-occurrence. As a result, a model may predict "cut cake" from the presence of a knife and a cake without verifying whether the hand, tool, target, contact pattern, and object state jointly support the action. We propose ScriptHOI, a structured framework that represents each interaction phrase as a soft scripted state transition. Rather than treating a phrase as a single class token, ScriptHOI decomposes it into body-role, contact, geometry, affordance, motion, and object-state slots. A visual state tokenizer parses each detected human-object pair into corresponding state tokens, and a slot-wise matcher estimates both script coverage and script conflict. These two quantities calibrate HOI logits, expose missing visual evidence, and provide training constraints for incomplete annotations. To avoid suppressing valid but unannotated interactions, we further introduce interval partial-label learning, which constrains unannotated candidates with script-derived lower and upper probability bounds instead of assigning closed-world negatives. A counterfactual script contrast loss swaps individual script slots to discourage object-only shortcuts. Experiments on HICO-DET, V-COCO, and open-vocabulary HOI splits show that ScriptHOI improves rare and unseen interaction recognition while substantially reducing affordance-conflict false positives.
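
The abstract's interval partial-label learning replaces closed-world negatives with probability bounds. Below is a minimal sketch of how such a penalty could look, assuming the bounds are already computed from script coverage and conflict; the function name and quadratic penalty are our assumptions, not the paper's exact loss.

```python
import torch

def interval_partial_label_loss(probs: torch.Tensor,
                                lower: torch.Tensor,
                                upper: torch.Tensor) -> torch.Tensor:
    """Hypothetical interval partial-label penalty for unannotated HOIs.

    Rather than pushing every unannotated candidate toward 0 (a
    closed-world negative), the prediction is penalized only when it
    leaves the script-derived [lower, upper] probability interval.
    The bounds are assumed inputs here; per the abstract they would be
    derived from script coverage and conflict.
    """
    below = (lower - probs).clamp(min=0.0)  # prediction lower than the evidence allows
    above = (probs - upper).clamp(min=0.0)  # prediction higher than the script supports
    return (below.pow(2) + above.pow(2)).mean()
```

Under this reading, a candidate whose script is fully covered and conflict-free would receive a wide interval (little penalty anywhere), while one with a strongly contradicted slot would receive a tight upper bound near zero.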
