This seminar and working session is held every Tuesday from 11 a.m. to 12 p.m. ET. It is associated with the project on Mathematics for Deep Learning. The goal of each session is to present a topic and open research questions in this area. It will serve as a backbone for the formation of spontaneous working groups around some of these questions.
Each session is either one 40-minute presentation followed by a 20-minute discussion, or two 20-minute presentations each followed by a 10-minute discussion. The slides and a short bibliography of a few papers are provided.
Fall 2021 Schedule
| Date | Speaker | Title |
|---|---|---|
| September 14, 2021 | Emmanuel Candès | Reliable Predictions? Counterfactual Predictions? Equitable Treatment? Some Recent Progress in Predictive Inference |
| September 21, 2021 | Alberto Bietti | On the Sample Complexity of Learning under Invariance and Geometric Stability |
| October 19, 2021 | Michael Lindsey | Tools for multimodal sampling |
| October 26, 2021 | René Vidal | Learning dynamics and implicit bias of gradient flow in overparameterized linear models |
Spring 2021 Schedule
| Date | Speaker | Title | Slides |
|---|---|---|---|
| January 19, 2021 | Francis Bach | Towards a quantitative analysis of gradient descent for infinitely wide two-layer neural networks | |
| January 26, 2021 | Gérard Ben Arous | Some naive open problems about optimization in high dimensions: how much data is needed to tame topological complexity and entropy with simple algorithms? | |
| February 2, 2021 | Florent Krzakala | Teacher-student models: exactly solvable models for statistical learning | |
| February 9, 2021 | Lenka Zdeborová | Gradient-based algorithms in high-dimensions under limited sample complexity | |
| February 16, 2021 | Stéphane Mallat | Harmonic Analysis View of Deep Network Concentration | Talk Slides |
| February 23, 2021 | Jianfeng Lu | Towards numerical analysis for solving high dimensional PDEs based on neural networks | |
| March 2, 2021 | Risi Kondor | The representation theory of equivariant neural networks and graph neural nets | |
| March 9, 2021 | Eric Darve | Physics-informed machine learning: open mathematical questions | |
| March 16, 2021 | Guillermo Sapiro | Is Deep Learning getting better? | |
| March 23, 2021 | | Spring Summit | |
| March 30, 2021 | Grant Rotskoff | Sampling with neural networks: prospects and perils | |
| April 6, 2021 | Sébastien Bubeck | A law of robustness for two-layers neural networks | |
| April 20, 2021 | Rong Ge | A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network | |
| April 27, 2021 | Rachel Ward | Generalization bounds for sparse random feature expansions | |
| May 4, 2021 | Julien Mairal | Trainable Algorithms for Inverse Imaging Problems | Talk Slides |
| May 11, 2021 | Ohad Shamir | Elephant in the Room: Non-Smooth Non-Convex Optimization | |
| May 18, 2021 | Bin Yu | Disentangled interpretations for deep learning with ACD | |
| May 25, 2021 | Tom Goldstein | Breaking Machine Learning Systems at the Industrial Scale | |
| June 1, 2021 | Léon Bottou | Learning Representations Using Causal Invariance | |
| June 8, 2021 | Sanjeev Arora | Trajectory, Trajectory, Trajectory | Talk Slides |
| June 15, 2021 | Yi Ma | Deep (Convolution) Networks from First Principles | Talk Slides |
Fall 2020 Schedule
| Date | Speaker (Chair) | Title | Slides |
|---|---|---|---|
| November 10, 2020 | David Donoho (Stanford), Chair: Stéphane Mallat (CCM) | Prevalence of Neural Collapse during the terminal phase of deep learning training | |
| November 17, 2020 | Joan Bruna (Flatiron CCM, NYU), Chair: TBA | On depth separation for neural networks | |
| November 24, 2020 | Stefanie Jegelka (MIT), Chair: Joan Bruna | Learning in Graph Neural Networks | |
| December 1, 2020 | Eric Vanden-Eijnden (NYU) | | |
| December 8, 2020 | Eero Simoncelli (Flatiron CCN, NYU), Chair: Stéphane Mallat | Solving Linear Inverse Problems Using the Prior Implicit in a Denoiser | Talk Slides |
| December 15, 2020 | Lexing Ying (Stanford), Chair: TBA | Solving Inverse Problems with Deep Learning | |