Saturation-Aware Space-Variant Blind Image Deblurring
Muhammad Z. Alam, Larry Stetsiuk, Arooba Zeshan
TLDR
A novel framework for blind image deblurring addresses saturated pixels under HDR and low-light conditions, improving restoration quality without introducing ringing artifacts.
Key contributions
- Addresses saturated pixels in HDR and low-light blind image deblurring.
- Segments images based on blur intensity and proximity to saturation.
- Leverages a pre-estimated Light Spread Function to mitigate stray light.
- Estimates true radiance of saturated regions using the dark channel prior.
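Two of the cues listed above — proximity to saturation and the dark channel prior — can be sketched in a few lines of NumPy. This is an illustrative sketch only: the function names (`dark_channel`, `saturation_mask`), the patch size, and the 0.95 threshold are assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel prior: per-pixel minimum over color channels,
    followed by a local minimum filter over a patch neighborhood."""
    # img: H x W x 3 float array in [0, 1]
    min_rgb = img.min(axis=2)
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

def saturation_mask(img, thresh=0.95, dilate=5):
    """Mark pixels that are saturated, then dilate the mask so that
    pixels *near* saturation are also flagged (the 'proximity' cue)."""
    sat = img.max(axis=2) >= thresh
    pad = dilate // 2
    padded = np.pad(sat, pad, mode="constant")
    out = np.zeros_like(sat)
    h, w = sat.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + dilate, j:j + dilate].any()
    return out
```

The explicit loops keep the sketch dependency-free; in practice a minimum/maximum filter from `scipy.ndimage` would replace them.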
Why it matters
This framework significantly improves blind image deblurring in the challenging HDR and low-light scenarios where saturation is common. It outperforms existing saturation-aware and general-purpose methods while avoiding ringing artifacts, making it a valuable advance for image restoration.
Original Abstract
This paper presents a novel saturation-aware, space-variant blind image deblurring framework designed to address the challenges posed by saturated pixels when deblurring under high-dynamic-range and low-light conditions. The proposed approach segments the image based on blur intensity and proximity to saturation, leveraging a pre-estimated Light Spread Function to mitigate stray-light effects. By accurately estimating the true radiance of saturated regions using the dark channel prior, our method enhances the deblurring process without introducing artifacts such as ringing. Experimental evaluations on both synthetic and real-world datasets demonstrate that the framework improves deblurring outcomes across various scenarios, showcasing superior performance compared to state-of-the-art saturation-aware and general-purpose methods. This adaptability highlights the framework's potential for integration with existing and emerging blind image deblurring techniques.
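The abstract emphasizes deblurring "without introducing artifacts like ringing" around clipped pixels. The paper's exact deconvolution step is not given in this digest; one common way such methods suppress clipping-induced ringing is to exclude saturated observations from the data term of Richardson-Lucy deconvolution. The sketch below assumes that approach: `masked_richardson_lucy` and its parameters are hypothetical, not the authors' method.

```python
import numpy as np

def fft_conv(x, psf):
    """Circular convolution via FFT; psf is centered and same shape as x."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(np.fft.ifftshift(psf))))

def masked_richardson_lucy(blurred, psf, valid, iters=50, eps=1e-8):
    """Richardson-Lucy deconvolution that skips clipped pixels: where
    valid == False the data ratio is replaced by a neutral 1.0, so
    saturated observations cannot inject ringing into the estimate."""
    est = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1, ::-1]  # adjoint (correlation) kernel
    for _ in range(iters):
        reblurred = fft_conv(est, psf)
        ratio = np.where(valid, blurred / (reblurred + eps), 1.0)
        est = est * fft_conv(ratio, psf_flip)
    return est
```

A space-variant method like the one described would run such an update per segment with a locally estimated PSF, rather than one global kernel.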