Rishi Sonthalia Talk


Bio: Rishi Sonthalia is currently a Hedrick Assistant Adjunct Professor at UCLA, working with Andrea Bertozzi, Jacob Foster, and Guido Montufar. He obtained his Ph.D. in Applied and Interdisciplinary Mathematics from the University of Michigan, advised by Anna C. Gilbert and Raj Rao Nadakuditi. He completed his undergraduate degree at Carnegie Mellon University, earning a B.S. in Discrete Math and Computer Science.

Title: Surprises in Structured Denoising and Double Descent for Linear Models
Abstract: In this talk, we look at the problem of structured denoising for low-rank data using linear models. First, when training an unregularized denoising feedforward neural network, we show empirically that the generalization error, as a function of the number of training data points, follows a double descent curve. We formalize the question of how many training data points and how much noise should be used by studying the generalization error for denoising noisy test data. Prior work on computing the generalization error focuses on adding noise to the target outputs; adding noise to the inputs, however, is more in line with current pre-training practices. In the linear (in the inputs) regime, we provide an asymptotically exact formula for the generalization error for rank r data.
From this, we derive a formula for the amount of noise that should be added to the training data to minimize the denoising error. This reveals a shrinkage phenomenon: the performance of denoising DNNs improves when the training SNR is made smaller than the test SNR. Further, we see that the amount of shrinkage (the ratio of train to test SNR) follows a double descent curve as well.
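
As a rough illustration of the setup described in the abstract (not code from the talk), the sketch below trains an unregularized linear denoiser on noisy low-rank inputs with clean targets and tracks the test denoising error as the number of training points grows. The dimension, rank, noise level, and sample sizes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: unregularized linear denoising of low-rank data.
# All constants below are illustrative choices, not from the talk.
import numpy as np

d, r = 50, 5              # ambient dimension and data rank (assumed)
sigma = 0.5               # input noise level (assumed)
n_test = 2000
rng = np.random.default_rng(0)

U = np.linalg.qr(rng.standard_normal((d, r)))[0]   # fixed rank-r subspace

def low_rank_data(n):
    # Clean data X = U @ Z lying in the fixed rank-r column space of U.
    Z = rng.standard_normal((r, n))
    return U @ Z

X_test = low_rank_data(n_test)
X_test_noisy = X_test + sigma * rng.standard_normal(X_test.shape)

for n in range(5, 151, 5):
    X = low_rank_data(n)                                 # clean targets
    Y = X + sigma * rng.standard_normal(X.shape)         # noisy inputs
    # Minimum-norm least-squares denoiser: W = argmin_W ||W Y - X||_F,
    # with no regularization, solved via the pseudoinverse.
    W = X @ np.linalg.pinv(Y)
    mse = np.mean((W @ X_test_noisy - X_test) ** 2)
    print(f"n = {n:3d}   test denoising MSE = {mse:.4f}")

# In experiments of this kind, the test error typically spikes when the number
# of training points is close to the ambient dimension d and then falls again,
# tracing out a double-descent-shaped curve.
```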
