Dancing under the stars: video denoising in starlight CVPR 2022
- Kristina Monakhova UC Berkeley
- Stephan Richter Intel Labs
- Laura Waller UC Berkeley
- Vladlen Koltun Intel Labs
Abstract
Imaging in low light is extremely challenging due to low photon counts. Using sensitive CMOS cameras, it is currently possible to take videos at night under moonlight (0.05-0.3 lx illumination). In this paper, we demonstrate photorealistic video under starlight (no moon present, <0.001 lx) for the first time. To enable this, we develop a GAN-tuned physics-based noise model to more accurately represent camera noise at the lowest light levels. Using this noise model, we train a video denoiser using a combination of simulated noisy video clips and real noisy still images. We present a 5-10fps video dataset with significant motion taken between 0.6-0.7 mlx with no active illumination. Comparing against alternative methods, we achieve improved video quality at the lowest light levels, demonstrating photorealistic video denoising in starlight for the first time.
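The physics-based noise model mentioned above combines several components (the paper additionally fits fixed-pattern, banding, and quantization terms with a GAN). As a rough illustration only, here is a minimal shot-plus-read-noise simulator; the parameter names and values are assumptions, not the paper's calibrated model:

```python
import numpy as np

def simulate_low_light(clean, photons_per_unit=20.0, read_std=2.0, rng=None):
    """Apply a simple shot + read noise model to a clean image in [0, 1].

    A minimal sketch, not the paper's full GAN-tuned noise model.
    `photons_per_unit` and `read_std` are hypothetical parameters.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Shot noise: photon arrivals at each pixel are Poisson-distributed.
    photons = rng.poisson(clean * photons_per_unit)
    # Read noise: approximately Gaussian, added by the sensor electronics.
    noisy = photons + rng.normal(0.0, read_std, size=clean.shape)
    # Rescale back to the [0, 1] range of the input.
    return noisy / photons_per_unit
```

At starlight levels the effective photon count per pixel is tiny, so the Poisson term dominates and the noisy output can deviate wildly from the clean signal.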
Denoised Submillilux videos
Here we show our denoised 5-10fps videos taken at submillilux light levels with no external illumination.
Comparison against other methods
Our method produces good temporal consistency and minimal artifacts at the lowest light levels.
Dancing under the Stars Dataset
We provide a dataset of submillilux videos as well as a dataset of paired calibration images. All data are available either in RAW format (.DNG) or as preloaded .mat files.
Submillilux videos
We provide 42 unpaired raw noisy video clips taken at 5-10fps. These clips vary in length, totaling over 35 minutes of content. All videos were taken on a clear, moonless night with no external illumination, and each clip contains significant motion (dancing, volleyball, flags waving, etc.), making the dataset a challenging test for video denoising algorithms.
- Submillilux Dataset (92 GB): [link]
Paired calibration images
We provide several bursts of paired images for the purpose of noise model training.
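Bursts of a static scene are the standard way to calibrate sensor noise: under a Poisson-Gaussian model, the per-pixel variance across the burst is linear in the per-pixel mean. A hypothetical calibration sketch along those lines (not the paper's GAN-tuned fitting procedure):

```python
import numpy as np

def fit_noise_params(noisy_burst):
    """Estimate gain and read-noise variance from a (B, H, W) burst
    of a static scene.

    Under a Poisson-Gaussian model, variance = gain * mean + read_var,
    so a line fit through per-pixel (mean, variance) pairs recovers both
    parameters. Illustrative only; not the paper's calibration code.
    """
    mean = noisy_burst.mean(axis=0).ravel()
    var = noisy_burst.var(axis=0).ravel()
    # Least-squares line through the (mean, variance) point cloud.
    gain, read_var = np.polyfit(mean, var, 1)
    return gain, read_var
```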
Unpaired clean RGB+NIR videos
Since we use an RGB+NIR camera instead of a standard RGB camera, we also provide a dataset of clean (noiseless) videos from our camera.
- Unpaired clean dataset: [link]
Citation
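If you use the dataset, the paper can be cited as follows. This BibTeX entry is assembled from the title, author list, and venue above; check the official proceedings entry for the exact key and fields:

```bibtex
@inproceedings{monakhova2022dancing,
  title     = {Dancing under the Stars: Video Denoising in Starlight},
  author    = {Monakhova, Kristina and Richter, Stephan and Waller, Laura and Koltun, Vladlen},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022}
}
```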
Supplemental materials for the paper can be found here.
The website template was borrowed from Ben Mildenhall.