ActCam: Zero-Shot Joint Camera and 3D Motion Control for Video Generation
Omar El Khalifi, Thomas Rossi, Oscar Fossey, Thibault Fouque, Ulysse Mizrahi + 4 more
TLDR
ActCam enables zero-shot joint 3D motion and camera control for video generation, improving fidelity and camera adherence with staged guidance.
Key contributions
- Enables zero-shot joint character motion and camera trajectory control for video generation.
- Utilizes pretrained image-to-video diffusion models with depth and pose conditioning.
- Generates geometrically consistent pose and depth conditions from source video and target camera.
- Introduces a two-phase conditioning schedule for robust scene structure and detail refinement.
Why it matters
Video generation often struggles with precise control over both actor motion and camera movement. ActCam offers a zero-shot solution for joint 3D motion and camera control, letting artists create dynamic, precisely controlled videos without retraining models, and improving fidelity and viewpoint consistency in the process.
Original Abstract
For artistic applications, video generation requires fine-grained control over both performance and cinematography, i.e., the actor's motion and the camera trajectory. We present ActCam, a zero-shot method for video generation that jointly transfers character motion from a driving video into a new scene and enables per-frame control of intrinsic and extrinsic camera parameters. ActCam builds on any pretrained image-to-video diffusion model that accepts conditioning in terms of scene depth and character pose. Given a source video with a moving character and a target camera motion, ActCam generates pose and depth conditions that remain geometrically consistent across frames. We then run a single sampling process with a two-phase conditioning schedule: early denoising steps condition on both pose and sparse depth to enforce scene structure, after which depth is dropped and pose-only guidance refines high-frequency details without over-constraining the generation. We evaluate ActCam on multiple benchmarks spanning diverse character motions and challenging viewpoint changes. We find that, compared to pose-only control and other pose and camera methods, ActCam improves camera adherence and motion fidelity, and is preferred in human evaluations, especially under large viewpoint changes. Our results highlight that careful camera-consistent conditioning and staged guidance can enable strong joint camera and motion control without training. Project page: https://elkhomar.github.io/actcam/.
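The two-phase conditioning schedule described in the abstract can be sketched as a single denoising loop that switches its conditioning signals partway through. The sketch below uses hypothetical names (`denoise_step`, `depth_phase_frac`, the conditioning keyword arguments); the abstract does not specify the actual sampler API or where the phase boundary falls.

```python
def two_phase_sample(denoise_step, latent, pose_seq, depth_seq,
                     num_steps=50, depth_phase_frac=0.4):
    """Minimal sketch of ActCam's staged guidance (assumed interface).

    Early denoising steps condition on both pose and sparse depth to
    enforce scene structure; later steps drop depth so pose-only
    guidance can refine high-frequency detail without over-constraining
    the generation.
    """
    # Assumed split point between the two phases; the paper's actual
    # schedule fraction is not given in the abstract.
    depth_cutoff = int(num_steps * depth_phase_frac)
    for t in range(num_steps):
        if t < depth_cutoff:
            # Phase 1: pose + sparse depth fixes scene geometry.
            latent = denoise_step(latent, t, pose=pose_seq, depth=depth_seq)
        else:
            # Phase 2: pose-only guidance refines detail.
            latent = denoise_step(latent, t, pose=pose_seq, depth=None)
    return latent
```

Because both phases run inside one sampling process, no second pass or model retraining is needed; only the conditioning inputs change at the cutoff step.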