Mathematics of Deep Learning Seminar: Eero Simoncelli

Title: Solving Linear Inverse Problems Using the Prior Implicit in a Denoiser

Abstract: Prior probability models are a central component of many image processing problems, but density estimation is notoriously difficult for high-dimensional signals such as photographic images. Deep neural networks have provided impressive solutions for problems such as denoising, solutions that implicitly rely on a prior probability model of natural images. I'll describe our progress in understanding the nature of this implicit prior, and in using it as a substrate for solving any linear inverse problem. We rely on a little-known statistical result due to Miyasawa (1961), who showed that the least-squares solution for removing additive Gaussian noise can be written directly in terms of the gradient of the log of the noisy signal density. We use this fact to develop a stochastic coarse-to-fine gradient ascent procedure for drawing high-probability samples from the implicit prior embedded within a CNN trained to perform blind (i.e., unknown noise level) least-squares denoising. A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any linear inverse problem, with no additional training. We demonstrate this general form of transfer learning in multiple applications, using the same algorithm to produce high-quality solutions for deblurring, super-resolution, inpainting, and compressive sensing.
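The two ingredients named in the abstract — Miyasawa's identity relating the least-squares denoiser to the gradient of the log noisy density, and the stochastic coarse-to-fine ascent on the denoiser residual — can be illustrated in a toy one-dimensional setting. The sketch below is an assumption-laden illustration, not the authors' implementation: a standard-normal prior stands in for the image prior (so the optimal denoiser and the score of the noisy density have closed forms, replacing the trained CNN), and the step size `h`, noise-control parameter `beta`, and stopping threshold are illustrative choices.

```python
import numpy as np

def grad_log_noisy_density(y, sigma):
    # Standard-normal prior: the noisy observation y = x + n has density
    # N(0, 1 + sigma^2), so grad_y log p(y) = -y / (1 + sigma^2).
    return -y / (1.0 + sigma**2)

def ls_denoiser(y, sigma):
    # Least-squares (posterior-mean) denoiser for the same toy prior.
    return y / (1.0 + sigma**2)

# Miyasawa (1961): x_hat(y) = y + sigma^2 * grad_y log p(y).
sigma = 0.5
y = np.linspace(-3.0, 3.0, 7)
lhs = ls_denoiser(y, sigma)
rhs = y + sigma**2 * grad_log_noisy_density(y, sigma)
assert np.allclose(lhs, rhs)  # identity holds exactly in this toy model

# Stochastic coarse-to-fine ascent: step along the denoiser residual
# f(y) = x_hat(y) - y (proportional to the score of the noisy density),
# injecting a shrinking amount of fresh noise as the residual decays.
rng = np.random.default_rng(0)
y = rng.normal(scale=3.0, size=1000)  # coarse, high-noise initialization
sigma_t = 1.0
h, beta = 0.1, 0.5  # illustrative step size and noise-control parameter
for _ in range(1000):
    if sigma_t <= 0.01:
        break
    f = ls_denoiser(y, sigma_t) - y  # = sigma_t^2 * score, per Miyasawa
    gamma = np.sqrt((1 - beta * h)**2 - (1 - h)**2) * sigma_t
    y = y + h * f + gamma * rng.normal(size=y.shape)
    sigma_t = np.sqrt(np.mean(f**2))  # residual magnitude sets the schedule
# The iterates contract from the coarse initialization toward the
# high-probability region of the prior.
```

In the paper's setting the denoiser is a blind CNN, so `f` is simply the network's residual; here the closed-form Gaussian denoiser plays that role, which is why the residual-based noise estimate is only a rough schedule in this toy.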

Joint work with Zahra Kadkhodaie, Sreyas Mohan, and Carlos Fernandez-Granda

Talk Slides
