One Size Fits All: Can We Train One Denoiser for All Noise Levels?

Part of the Proceedings of the International Conference on Machine Learning (ICML 2020), pre-proceedings


Authors

Abhiram Gnanasambandam, Stanley Chan

Abstract

When using deep neural networks for signal estimation tasks such as image denoising, it is generally preferred to train one network and apply it to all noise levels. The de facto training protocol for achieving this goal is to train the network with noisy samples whose noise levels are uniformly distributed across the range of interest. But why should we allocate the samples uniformly? Can we use more training samples that are less noisy and fewer samples that are more noisy? What is the optimal distribution, and how do we obtain it? The goal of this paper is to address these questions. In particular, we show that the sampling problem can be formulated as a minimax risk optimization. We show that, although the neural networks themselves are non-convex, the minimax problem is convex. We derive a dual ascent algorithm to determine the optimal distribution, whose convergence is guaranteed. We show that the framework applies not only to denoising but to any trainable estimator operating under a range of uncertainty conditions. We demonstrate applications in image denoising, low-light reconstruction, and super-resolution.
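The sketch below illustrates, under stated assumptions, the dual ascent idea summarized in the abstract: alternate between training the denoiser on noise levels drawn from the current sampling distribution (primal step) and shifting probability mass toward the noise levels with the highest risk (dual step). It is not the authors' implementation; the toy network, the noise-level grid, the step sizes, and all helper names are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of dual ascent over a sampling distribution.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

noise_levels = np.linspace(0.1, 1.0, 10)                 # hypothetical grid of noise levels
p = np.full(len(noise_levels), 1.0 / len(noise_levels))  # sampling distribution, starts uniform

# Toy denoiser on 16-dimensional signals; a real application would use an image CNN.
denoiser = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def make_batch(sigma, n=256, dim=16):
    """Toy clean signals plus Gaussian noise at level sigma (illustrative data only)."""
    clean = torch.randn(n, dim)
    noisy = clean + sigma * torch.randn(n, dim)
    return noisy, clean

for outer in range(50):
    # Primal step: train the denoiser with noise levels sampled from p.
    for _ in range(20):
        sigma = np.random.choice(noise_levels, p=p)
        noisy, clean = make_batch(sigma)
        loss = nn.functional.mse_loss(denoiser(noisy), clean)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Dual step: estimate the per-level risk and push sampling mass toward the
    # worst-performing noise levels (multiplicative update kept on the simplex).
    with torch.no_grad():
        risk = np.array([
            nn.functional.mse_loss(denoiser(noisy), clean).item()
            for noisy, clean in (make_batch(s, n=512) for s in noise_levels)
        ])
    step = 0.5
    p = p * np.exp(step * risk)
    p = p / p.sum()

print("final sampling distribution:", np.round(p, 3))
```

In this sketch the distribution typically concentrates on the noisier (harder) levels, which is the qualitative behavior one would expect from a minimax formulation; the actual update rule, convergence guarantee, and experiments are given in the paper.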