ArXiv TLDR

MoRight: Motion Control Done Right

2604.07348

Shaowei Liu, Xuanchi Ren, Tianchang Shen, Huan Ling, Saurabh Gupta + 3 more

cs.CV cs.AI cs.GR cs.LG cs.RO

TLDR

MoRight is a unified framework for motion-controlled video generation, offering disentangled object/camera control and learning motion causality.

Key contributions

  • Enables disentangled control of object motion and camera viewpoint for video generation.
  • Learns motion causality by decomposing motion into active (user-driven) and passive components.
  • Supports both forward (predict consequences) and inverse (recover actions) motion reasoning.
  • Achieves state-of-the-art performance in generation quality and motion controllability.
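The paper transfers object motion specified in a canonical static view to a target camera viewpoint via temporal cross-view attention. The details of MoRight's implementation are not given in this summary, but the core idea can be illustrated with a minimal sketch: per frame, target-view tokens query static-view motion tokens with standard scaled dot-product attention. The function name, token shapes, and plain-numpy setup here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(target_feats, static_feats):
    """Toy per-frame cross-view attention (shapes are hypothetical).

    target_feats: (T, N, D) tokens from the chosen target viewpoint
    static_feats: (T, M, D) motion tokens from the canonical static view
    Returns motion-conditioned target-view tokens of shape (T, N, D).
    """
    T, N, D = target_feats.shape
    scale = 1.0 / np.sqrt(D)
    out = np.empty_like(target_feats)
    for t in range(T):
        # Queries from the target view attend to static-view motion tokens.
        attn = softmax(target_feats[t] @ static_feats[t].T * scale)  # (N, M)
        out[t] = attn @ static_feats[t]
    return out

rng = np.random.default_rng(0)
tgt = rng.normal(size=(4, 6, 8))   # 4 frames, 6 target tokens, dim 8
sta = rng.normal(size=(4, 5, 8))   # 4 frames, 5 static-view tokens
fused = cross_view_attention(tgt, sta)
print(fused.shape)  # (4, 6, 8)
```

Because the attention is computed frame by frame against the static view, the camera trajectory of the target view can change without altering the motion signal, which is the intuition behind disentangled camera and object control.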

Why it matters

Existing methods struggle with disentangled control and motion causality in video generation. MoRight addresses these limitations, enabling more realistic and controllable scene dynamics. This advances the field by allowing users to drive actions with coherent reactions and adjust viewpoints freely.

Original Abstract

Generating motion-controlled videos--where user-specified actions drive physically plausible scene dynamics under freely chosen viewpoints--demands two capabilities: (1) disentangled motion control, allowing users to separately control the object motion and adjust camera viewpoint; and (2) motion causality, ensuring that user-driven actions trigger coherent reactions from other objects rather than merely displacing pixels. Existing methods fall short on both fronts: they entangle camera and object motion into a single tracking signal and treat motion as kinematic displacement without modeling causal relationships between object motion. We introduce MoRight, a unified framework that addresses both limitations through disentangled motion modeling. Object motion is specified in a canonical static-view and transferred to an arbitrary target camera viewpoint via temporal cross-view attention, enabling disentangled camera and object control. We further decompose motion into active (user-driven) and passive (consequence) components, training the model to learn motion causality from data. At inference, users can either supply active motion and MoRight predicts consequences (forward reasoning), or specify desired passive outcomes and MoRight recovers plausible driving actions (inverse reasoning), all while freely adjusting the camera viewpoint. Experiments on three benchmarks demonstrate state-of-the-art performance in generation quality, motion controllability, and interaction awareness.
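The abstract's forward/inverse reasoning over active and passive motion can be made concrete with a deliberately simplified model. Assume, purely for illustration, that the learned causal relationship is a linear map `W` from active motion to passive consequences; then forward reasoning is applying `W`, and inverse reasoning is solving for a plausible action given a desired outcome. MoRight itself learns this causality from data with a generative model, not a linear map; everything below is a hypothetical sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 6
# Toy stand-in for a learned active-to-passive causality map.
W = np.eye(D) + 0.1 * rng.normal(size=(D, D))

def forward_reasoning(active_motion):
    # Predict passive consequences of a user-supplied active motion.
    return W @ active_motion

def inverse_reasoning(passive_motion):
    # Recover a plausible driving action for a desired passive outcome
    # (least squares, since the map need not be square/invertible in general).
    return np.linalg.lstsq(W, passive_motion, rcond=None)[0]

active = rng.normal(size=D)
passive = forward_reasoning(active)
recovered = inverse_reasoning(passive)
print(np.allclose(active, recovered, atol=1e-6))
```

In the linear toy case the recovered action matches the original; in the actual model, inverse reasoning instead samples plausible driving actions consistent with the specified passive outcome.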
