ArXiv TLDR

Relit-LiVE: Relight Video by Jointly Learning Environment Video

2605.06658

Weiqing Xiao, Hong Li, Xiuyu Yang, Houyuan Chen, Wenyi Li + 5 more

cs.CV

TLDR

Relit-LiVE relights videos with physical consistency and temporal stability, without requiring camera pose, by conditioning on raw reference images and jointly predicting per-frame environment maps.

Key contributions

  • Avoids unreliable intrinsic decomposition by using raw reference images for robust scene cues.
  • Jointly predicts relit videos and per-frame environment maps in a single diffusion process.
  • Achieves physically consistent, temporally stable video relighting without needing camera pose.
  • Supports dynamic lighting, camera motion, and applications like material editing and object insertion.

Why it matters

Existing video relighting methods rely on intrinsic decomposition, which is unreliable on real-world footage and produces distorted appearances and temporal artifacts. Relit-LiVE sidesteps this by conditioning directly on raw reference images and jointly predicting environment maps, yielding consistent, stable results that benefit video editing and rendering pipelines.

Original Abstract

Recent advances have shown that large-scale video diffusion models can be repurposed as neural renderers by first decomposing videos into intrinsic scene representations and then performing forward rendering under novel illumination. While promising, this paradigm fundamentally relies on accurate intrinsic decomposition, which remains highly unreliable for real-world videos and often leads to distorted appearances, broken materials, and accumulated temporal artifacts during relighting. In this work, we present Relit-LiVE, a novel video relighting framework that produces physically consistent, temporally stable results without requiring prior knowledge of camera pose. Our key insight is to explicitly introduce raw reference images into the rendering process, enabling the model to recover critical scene cues that are inevitably lost or corrupted in intrinsic representations. Furthermore, we propose a novel environment video prediction formulation that simultaneously generates relit videos and per-frame environment maps aligned with each camera viewpoint in a single diffusion process. This joint prediction enforces strong geometric-illumination alignment and naturally supports dynamic lighting and camera motion, significantly improving physical consistency in video relighting while easing the requirement of known per-frame camera pose. Extensive experiments demonstrate that Relit-LiVE consistently outperforms state-of-the-art video relighting and neural rendering methods across synthetic and real-world benchmarks. Beyond relighting, our framework naturally supports a wide range of downstream applications, including scene-level rendering, material editing, object insertion, and streaming video relighting. The Project is available at https://github.com/zhuxing0/Relit-LiVE.
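The core idea of the environment video prediction formulation, as described in the abstract, is that the relit frames and per-frame environment maps are denoised together in a single diffusion process. The sketch below is a minimal toy illustration of that joint layout only; the shapes, the channel-wise stacking, and `toy_denoiser` are all assumptions for illustration and do not reflect the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent dimensions (assumed, not from the paper): frames, channels, height, width.
T, C, H, W = 8, 4, 32, 32

# Noisy latents for the relit video and the per-frame environment maps.
video_latents = rng.normal(size=(T, C, H, W))
env_latents = rng.normal(size=(T, C, H, W))

# Joint formulation: stack both targets along the channel axis so a single
# denoiser predicts them together, frame by frame, keeping each environment
# map aligned with its camera viewpoint.
joint = np.concatenate([video_latents, env_latents], axis=1)  # (T, 2C, H, W)

def toy_denoiser(x):
    # Stand-in for the diffusion model's noise prediction (hypothetical).
    return 0.1 * x

# One denoising step applied jointly to both outputs.
denoised = joint - toy_denoiser(joint)

# Split the joint prediction back into the two aligned streams.
relit_video, env_maps = np.split(denoised, 2, axis=1)
```

Because both streams pass through the same denoising step, any geometric or illumination cue in one stream is available to the other at every step, which is the alignment property the abstract emphasizes.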
