OmniLight: One Model to Rule All Lighting Conditions
Youngjin Oh, Junyoung Park, Junhyeong Kwon, Nam Ik Cho
TLDR
OmniLight, a generalized model with WD-MoE, effectively restores images under diverse adverse lighting conditions, outperforming specialized baselines.
Key contributions
- Introduces OmniLight, a generalized model using Wavelet Domain Mixture-of-Experts (WD-MoE) for robust image restoration.
- Explores two contrasting strategies for diverse lighting conditions: a specialized model (DINOLight) and a generalized one (OmniLight).
- Secured top-tier rankings in all three lighting-related tracks of the NTIRE 2026 Challenge.
Why it matters
Adverse lighting significantly impacts computer vision systems. This paper introduces OmniLight, a generalized model that robustly restores images across diverse lighting conditions. Its top performance in NTIRE 2026 highlights its potential for real-world applications needing reliable image quality.
Original Abstract
Adverse lighting conditions, such as cast shadows and irregular illumination, pose significant challenges to computer vision systems by degrading visibility and color fidelity. Consequently, effective shadow removal and ambient lighting normalization (ALN) are critical for restoring underlying image content, improving perceptual quality, and facilitating robust performance in downstream tasks. However, while achieving state-of-the-art results on specific benchmarks is a primary goal in image restoration challenges, real-world applications often demand robust models capable of handling diverse domains. To address this, we present a comprehensive study on lighting-related image restoration by exploring two contrasting strategies. We leverage a robust framework for ALN, DINOLight, as a specialized baseline to exploit the characteristics of each individual dataset, and extend it to OmniLight, a generalized alternative incorporating our proposed Wavelet Domain Mixture-of-Experts (WD-MoE) that is trained across all provided datasets. Through a comparative analysis of these two methods, we discuss the impact of data distribution on the performance of specialized and unified architectures in lighting-related image restoration. Notably, both approaches secured top-tier rankings across all three lighting-related tracks in the NTIRE 2026 Challenge, demonstrating their outstanding perceptual quality and generalization capabilities. Our codes are available at https://github.com/OBAKSA/Lighting-Restoration.
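To give a rough intuition for the WD-MoE idea, the sketch below shows a toy wavelet-domain mixture of experts: an image is split into Haar subbands (LL, LH, HL, HH), a softmax gate mixes the outputs of simple "expert" functions per subband, and the result is inverse-transformed. This is a minimal illustration of the general concept, not the authors' implementation; the gating features, expert functions, and all parameter names (`gate_w`, `gate_b`, `experts`) are invented for this example.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # low-low: coarse content
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0  # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0  # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2 (perfect reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def wd_moe_restore(img, experts, gate_w, gate_b):
    """Toy WD-MoE: soft-gate a list of expert functions per wavelet subband.

    gate_w has shape (num_experts, 2); the two toy gating features are the
    subband's mean and standard deviation (hypothetical choices).
    """
    bands_out = []
    for band in haar_dwt2(img):
        feat = np.array([band.mean(), band.std()])
        logits = gate_w @ feat + gate_b
        w = np.exp(logits - logits.max())
        w /= w.sum()                       # softmax mixing weights
        bands_out.append(sum(wi * e(band) for wi, e in zip(w, experts)))
    return haar_idwt2(*bands_out)
```

With identity experts and a zero gate, the pipeline reduces to DWT followed by its inverse, so the input is reproduced exactly; a trained model would instead learn the gate and use restoration subnetworks as experts.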