ArXiv TLDR

Reshoot-Anything: A Self-Supervised Model for In-the-Wild Video Reshooting

arXiv: 2604.21776

Avinash Paliwal, Adithya Iyer, Shivin Yadav, Muhammad Ali Afridi, Midhun Harikumar

cs.CV

TLDR

Reshoot-Anything introduces a self-supervised model for in-the-wild video reshooting, generating pseudo multi-view training data from ordinary monocular videos to enable robust camera control.

Key contributions

  • Introduces a self-supervised framework for in-the-wild video reshooting using monocular videos.
  • Generates pseudo multi-view training triplets by cropping and warping single input videos (see the sketch after this list).
  • Learns 4D spatiotemporal structures by re-projecting textures across distinct times and viewpoints.
  • Achieves state-of-the-art temporal consistency and high-fidelity novel view synthesis.
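
The triplet construction is concrete enough to sketch. Below is a minimal, hypothetical Python sketch (not the authors' code) of how two independent, smoothed random-walk crop trajectories over one monocular clip could yield misaligned source and target views; the function names, frame and crop sizes, step scale, and smoothing constants are all illustrative assumptions.

```python
# Minimal, hypothetical sketch (not the authors' code): two independent,
# smoothed random-walk crop trajectories over one monocular clip, giving
# spatially misaligned source/target views for self-supervision.
import numpy as np

def random_walk_crops(num_frames, frame_hw=(720, 1280), crop_hw=(360, 640),
                      step_sigma=8.0, smooth=0.9, seed=0):
    """Per-frame top-left crop corners following a smoothed 2D random walk."""
    rng = np.random.default_rng(seed)
    (h, w), (ch, cw) = frame_hw, crop_hw
    pos = np.array([(h - ch) / 2.0, (w - cw) / 2.0])  # start centred
    vel = np.zeros(2)
    corners = []
    for _ in range(num_frames):
        vel = smooth * vel + (1.0 - smooth) * rng.normal(0.0, step_sigma, size=2)
        pos = np.clip(pos + vel, [0, 0], [h - ch, w - cw])
        corners.append(pos.astype(int))
    return np.stack(corners)  # (num_frames, 2): (top, left) per frame

def crop_video(video, corners, crop_hw=(360, 640)):
    """video: (T, H, W, 3) array; returns the (T, ch, cw, 3) cropped view."""
    ch, cw = crop_hw
    return np.stack([frame[top:top + ch, left:left + cw]
                     for frame, (top, left) in zip(video, corners)])

# Two different seeds give two misaligned "views" of the same clip, so the
# model cannot reconstruct the target by copying the current source frame.
T = 48
clip = np.zeros((T, 720, 1280, 3), dtype=np.uint8)  # placeholder for a real video
source_view = crop_video(clip, random_walk_crops(T, seed=1))
target_view = crop_video(clip, random_walk_crops(T, seed=2))
```

Sampling the two trajectories independently is what produces the spatial misalignment and artificial occlusions that force the model to route texture across time and viewpoint rather than copy it.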

Why it matters

This paper tackles the scarcity of paired multi-view data for dynamic, non-rigid scenes with a self-supervised framework trained on ordinary monocular videos. The result is robust camera control and high-fidelity novel view synthesis for dynamic scenes without multi-view capture rigs, a practical step toward camera-controlled reshooting in content creation.

Original Abstract

Precise camera control for reshooting dynamic videos is bottlenecked by the severe scarcity of paired multi-view data for non-rigid scenes. We overcome this limitation with a highly scalable self-supervised framework capable of leveraging internet-scale monocular videos. Our core contribution is the generation of pseudo multi-view training triplets, consisting of a source video, a geometric anchor, and a target video. We achieve this by extracting distinct smooth random-walk crop trajectories from a single input video to serve as the source and target views. The anchor is synthetically generated by forward-warping the first frame of the source with a dense tracking field, which effectively simulates the distorted point-cloud inputs expected at inference. Because our independent cropping strategy introduces spatial misalignment and artificial occlusions, the model cannot simply copy information from the current source frame. Instead, it is forced to implicitly learn 4D spatiotemporal structures by actively routing and re-projecting missing high-fidelity textures across distinct times and viewpoints from the source video to reconstruct the target. At inference, our minimally adapted diffusion transformer utilizes a 4D point-cloud derived anchor to achieve state-of-the-art temporal consistency, robust camera control, and high-fidelity novel view synthesis on complex dynamic scenes.
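
For the geometric anchor, the abstract describes forward-warping the first source frame with a dense tracking field. Below is a minimal sketch of that splatting step, assuming per-pixel tracks and an optional visibility mask; the function name and the nearest-neighbour splatting are illustrative choices, not the paper's implementation.

```python
# Minimal sketch, under assumptions (not the paper's implementation): forward-warp
# the first source frame to a later time with a dense tracking field, producing a
# splatted anchor whose holes mimic a re-projected, distorted point cloud.
import numpy as np

def forward_warp_anchor(frame0, tracks_t, visible_t=None):
    """frame0: (H, W, 3) first source frame.
    tracks_t: (H, W, 2) (x, y) position of every frame-0 pixel at time t.
    visible_t: optional (H, W) boolean visibility mask from the tracker."""
    h, w, _ = frame0.shape
    anchor = np.zeros_like(frame0)
    xs = np.round(tracks_t[..., 0]).astype(int)
    ys = np.round(tracks_t[..., 1]).astype(int)
    valid = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    if visible_t is not None:
        valid &= visible_t
    # Nearest-neighbour splatting: pixels that nothing maps to stay empty,
    # which introduces the hole and occlusion artifacts expected at inference.
    anchor[ys[valid], xs[valid]] = frame0[valid]
    return anchor
```

At inference, per the abstract, the anchor instead comes from a 4D point cloud re-projected along the requested camera path; the training-time splat above is only meant to imitate the holes and distortions of that input.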
