HDR Video Generation via Latent Alignment with Logarithmic Encoding
Naomi Ken Korem, Mohamed Oumoumad, Harel Cain, Matan Ben Yosef, Urska Jelercic + 4 more
TLDR
This paper enables high-quality HDR video generation by aligning logarithmically encoded HDR data with pretrained generative models' latent spaces.
Key contributions
- Achieves HDR generation by leveraging strong visual priors from pretrained generative models.
- Uses logarithmic encoding to align HDR imagery with the latent space of existing models.
- Enables direct adaptation via lightweight fine-tuning, avoiding complex encoder retraining.
- Introduces camera-mimicking degradations to infer missing HDR content from learned priors.
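The logarithmic-encoding idea in the contributions above can be sketched as follows. This is a toy log curve loosely inspired by cinematic encodings such as ARRI LogC or Sony S-Log; the exact curve, `black_offset`, and `max_stops` values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def log_encode(radiance, black_offset=0.01, max_stops=14.0):
    """Map linear HDR radiance into a bounded [0, 1] range with a
    logarithmic curve, so its distribution resembles the bounded,
    perceptually compressed data pretrained models were trained on.
    All parameters here are illustrative assumptions."""
    log_val = np.log2(radiance + black_offset) - np.log2(black_offset)
    return np.clip(log_val / max_stops, 0.0, 1.0)

def log_decode(encoded, black_offset=0.01, max_stops=14.0):
    """Inverse mapping from the encoded [0, 1] range back to linear radiance."""
    return np.exp2(encoded * max_stops + np.log2(black_offset)) - black_offset
```

Because the encoding is invertible, a generator fine-tuned on log-encoded frames can produce outputs that decode back to linear HDR radiance without retraining the encoder.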
Why it matters
HDR generation becomes much simpler when the HDR representation is aligned with the latent space of existing generative models. This approach avoids redesigning the model or collecting extensive new training data, making high-quality HDR video achievable with minimal adaptation. It shows that effective HDR handling doesn't require new architectures, only a representation chosen to match the models' learned priors.

Original Abstract
High dynamic range (HDR) imagery offers a rich and faithful representation of scene radiance, but remains challenging for generative models due to its mismatch with the bounded, perceptually compressed data on which these models are trained. A natural solution is to learn new representations for HDR, which introduces additional complexity and data requirements. In this work, we show that HDR generation can be achieved in a much simpler way by leveraging the strong visual priors already captured by pretrained generative models. We observe that a logarithmic encoding widely used in cinematic pipelines maps HDR imagery into a distribution that is naturally aligned with the latent space of these models, enabling direct adaptation via lightweight fine-tuning without retraining an encoder. To recover details that are not directly observable in the input, we further introduce a training strategy based on camera-mimicking degradations that encourages the model to infer missing high dynamic range content from its learned priors. Combining these insights, we demonstrate high-quality HDR video generation using a pretrained video model with minimal adaptation, achieving strong results across diverse scenes and challenging lighting conditions. Our results indicate that HDR, despite representing a fundamentally different image formation regime, can be handled effectively without redesigning generative models, provided that the representation is chosen to align with their learned priors.
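The abstract's "camera-mimicking degradations" can be illustrated with a minimal sketch: simulate a limited-dynamic-range capture (exposure jitter, highlight clipping, display gamma) of an HDR frame, so the model learns to hallucinate the clipped content from its priors. The specific degradation steps and parameter ranges below are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def camera_degrade(hdr, exposure=None, gamma=2.2, rng=None):
    """Simulate a camera capture of a linear HDR frame:
    random exposure scaling, sensor saturation (clipping), and
    display gamma. A toy stand-in for 'camera-mimicking degradations';
    the paper's exact operations are not specified here."""
    rng = rng if rng is not None else np.random.default_rng()
    if exposure is None:
        exposure = rng.uniform(0.25, 4.0)  # assumed exposure-jitter range
    clipped = np.clip(hdr * exposure, 0.0, 1.0)  # highlights lost to saturation
    return clipped ** (1.0 / gamma)              # gamma-encoded SDR-like output
```

During fine-tuning, such degraded frames would serve as conditioning input while the log-encoded HDR frames serve as the target, encouraging the model to infer the missing dynamic range.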