Collaborative Multi-Agent Scripts Generation for Enhancing Imperfect-Information Reasoning in Murder Mystery Games
Keyang Zhong, Junlin Xie, Hefeng Wu, Haofeng Li, Guanbin Li
TLDR
This paper introduces a collaborative multi-agent framework and a two-stage training strategy to enhance vision-language model (VLM) reasoning in complex, deceptive multiplayer games.
Key contributions
- Proposes a collaborative multi-agent framework for generating role-driven multiplayer game scripts.
- Synthesizes rich multimodal contexts including backstories, clues, and multi-hop reasoning chains.
- Introduces a two-stage agent-monitored training strategy: chain-of-thought (CoT) fine-tuning followed by GRPO-based reinforcement learning with reward shaping.
- Significantly improves VLM performance in narrative reasoning, hidden fact extraction, and deception-resilient understanding.
Why it matters
VLMs struggle with complex reasoning in social, deceptive environments such as multiplayer games. This paper offers a novel multi-agent framework for generating high-quality game scripts, together with a two-stage training strategy. The approach significantly enhances VLMs' ability to reason under uncertainty and deception, paving the way for more robust AI in complex social interactions.
Original Abstract
Vision-language models (VLMs) have shown impressive capabilities in perceptual tasks, yet they degrade in complex multi-hop reasoning under multiplayer game settings with imperfect and deceptive information. In this paper, we study a representative multiplayer task, Murder Mystery Games, which require inferring hidden truths based on partial clues provided by roles with different intentions. To address this challenge, we propose a collaborative multi-agent framework for evaluating and synthesizing high-quality, role-driven multiplayer game scripts, enabling fine-grained interaction patterns tailored to character identities (i.e., murderer vs. innocent). Our system generates rich multimodal contexts, including character backstories, visual and textual clues, and multi-hop reasoning chains, through coordinated agent interactions. We design a two-stage agent-monitored training strategy to enhance the reasoning ability of VLMs: (1) chain-of-thought based fine-tuning on curated and synthetic datasets that model uncertainty and deception; (2) GRPO-based reinforcement learning with agent-monitored reward shaping, encouraging the model to develop character-specific reasoning behaviors and effective multimodal multi-hop inference. Extensive experiments demonstrate that our method significantly boosts the performance of VLMs in narrative reasoning, hidden fact extraction, and deception-resilient understanding. Our contributions offer a scalable solution for training and evaluating VLMs under uncertain, adversarial, and socially complex conditions, laying the groundwork for future benchmarks in multimodal multi-hop reasoning under imperfect information.