Mathematics of Deep Learning
The efficiency and limitations of deep learning raise profound questions in high-dimensional statistics, probability, optimization, harmonic analysis, geometry and scientific computing. The underlying mathematics is still largely not understood, which limits the robustness and validation of applications in critical domains such as autonomous driving, medicine and the hard sciences.
The goal of this project is to create a space of interaction for a research community developing mathematics and proving theorems that are relevant to real high-dimensional applications. We shall emphasize applications to the sciences: physics, biology and neuroscience. Algorithm development and numerical simulation are an important part of this research, both to specify problems and to evaluate the relevance of mathematical results.
Organizers: Joan Bruna and Stéphane Mallat
This research collaboration involves members of all Flatiron Institute centers and several university teams. Interactions are organized around a weekly seminar and problem session, every Tuesday from 11 a.m. to 12 p.m. ET. One or two researchers will present their vision of important topics and raise research problems for discussion. The objective is to promote new collaborations between members of Flatiron centers and university teams. Workshops will be organized at the Flatiron Institute, and groups of researchers will be invited to work at CCM on particular projects.
See full schedule here.
The following non-exhaustive list of topics is covered:
1- Approximation and optimization of single hidden-layer neural networks
2- Depth and scale separation in deep networks
3- Optimization with and beyond gradient descent in deep networks
4- Geometry, invariance and graphs in deep networks
5- Deep nets for scientific machine learning, PDEs and inverse problems