2022 Mathematical and Scientific Foundations of Deep Learning Annual Meeting

Date & Time

Thursday, September 29, 2022: 8:30 AM – 5:00 PM
Friday, September 30, 2022: 8:30 AM – 2:00 PM

Location

Gerald D. Fischbach Auditorium
160 5th Ave
New York, NY 10010 United States

Registration Closed

Invitation Only

Participation is by invitation only. All participants must register. Your password and registration group have been sent via email.

Conference Organizers:
Peter Bartlett, University of California, Berkeley
René Vidal, Johns Hopkins University

This meeting will bring together members of the NSF-Simons Research Collaborations on the Mathematical and Scientific Foundations of Deep Learning (MoDL) and of projects in the NSF program on Stimulating Collaborative Advances Leveraging Expertise in the Mathematical and Scientific Foundations of Deep Learning (SCALE MoDL). The focus of the meeting is the set of challenging theoretical questions posed by deep learning methods and the development of mathematical and statistical tools to understand their success and limitations, to guide the design of more effective methods, and to initiate the study of the mathematical problems that emerge. The meeting aims to report on progress in these directions and to stimulate discussions of future directions.

  • Agenda

    THURSDAY, SEPTEMBER 29

    8:30 AM  CHECK-IN & BREAKFAST
    9:30 AM  Edgar Dobriban | T-Cal: An Optimal Test for the Calibration of Predictive Models
    10:30 AM  BREAK
    11:00 AM  Sasha Rakhlin | Decision-Making Without Confidence or Optimism: Beyond Linear Models
    12:00 PM  LUNCH
    1:00 PM  René Vidal | Semantic Information Pursuit for Explainable AI
    2:00 PM  BREAK
    2:30 PM  Andrea Montanari | From Projection Pursuit to Interpolation Thresholds in Small Neural Networks
    3:30 PM  BREAK
    4:00 PM  Shankar Sastry | Human Automation Teams in Societal Scale Systems
    5:00 PM  DAY ONE CONCLUDES

    FRIDAY, SEPTEMBER 30

    8:30 AM  CHECK-IN & BREAKFAST
    9:30 AM  Elisabetta Cornacchia | Learning with Neural Networks: Generalization, Unseen Data and Boolean Measures
    10:30 AM  BREAK
    11:00 AM  Soledad Villar | Invariances and Equivariances in Machine Learning
    12:00 PM  LUNCH
    1:00 PM  Gal Vardi | Implications of the Implicit Bias in Neural Networks
    2:00 PM  MEETING CONCLUDES
  • Abstracts

    Elisabetta Cornacchia

    Learning with Neural Networks: Generalization, Unseen Data and Boolean Measures

    We consider the learning of logical functions with gradient descent (GD) on neural networks. We introduce the notion of “Initial Alignment” (INAL) between a neural network at initialization and a target function, and prove that if a network and target do not have a noticeable INAL, then noisy gradient descent on a fully connected network with i.i.d. initialization cannot learn in polynomial time. Moreover, we prove that on symmetric neural networks, the generalization error can be lower-bounded in terms of the noise stability of the target function, supporting a conjecture made in prior works. We then show that in the distribution-shift setting, when the data withholding corresponds to freezing a single feature, the generalization error admits a tight characterization in terms of the Boolean influence for several relevant architectures. This is shown on linear models and supported experimentally on other models such as MLPs and Transformers. In particular, this puts forward the hypothesis that for such architectures and for learning logical functions, GD tends to have an implicit bias towards low-degree representations.

    Based on two joint works with E. Abbé, S. Bengio, J. Hązła, J. Kleinberg, A. Lotfi, C. Marquis, M. Raghu, C. Zhang.
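
    For readers unfamiliar with the two Boolean-analysis quantities named above, the following standard definitions may help (notation ours, not taken from the talk). For f : \{-1,1\}^n \to \mathbb{R} with Fourier expansion f = \sum_S \hat{f}(S)\,\chi_S,

    \[ \mathrm{Inf}_i(f) \;=\; \sum_{S \ni i} \hat{f}(S)^2, \qquad \mathrm{Stab}_\rho(f) \;=\; \sum_{S} \rho^{|S|}\,\hat{f}(S)^2, \]

    where the Boolean influence \mathrm{Inf}_i(f) measures the sensitivity of f to its i-th input coordinate, and the noise stability \mathrm{Stab}_\rho(f) equals \mathbb{E}[f(x)f(y)] for y a \rho-correlated copy of x.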
     

    Alexander Rakhlin

    Decision-Making Without Confidence or Optimism: Beyond Linear Models

    A fundamental challenge in interactive learning and decision making, ranging from bandit problems to reinforcement learning, is to provide sample-efficient learning algorithms that achieve near-optimal regret. Characterizing the statistical complexity in this setting is challenging due to the interactive nature of the problem. The difficulty is compounded by the use of complex non-linear models, such as neural networks, in the decision-making context. We present a complexity measure that is proven to be both necessary and sufficient for interactive learning, as well as a unified algorithm design principle. This complexity measure is inherently information-theoretic, and it unifies a number of existing approaches, both Bayesian and frequentist.
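
    The complexity measure referred to here is presumably the Decision-Estimation Coefficient (DEC) of Foster, Kakade, Qian and Rakhlin (2021); the sketch below is our reconstruction and should be checked against that paper. For a model class \mathcal{M}, a reference model \bar{M}, a decision space \Pi and a scale parameter \gamma > 0,

    \[ \mathrm{dec}_\gamma(\mathcal{M}, \bar{M}) \;=\; \inf_{p \in \Delta(\Pi)} \sup_{M \in \mathcal{M}} \; \mathbb{E}_{\pi \sim p}\Big[ f^M(\pi_M) - f^M(\pi) - \gamma\, D^2_{\mathrm{H}}\big(M(\pi), \bar{M}(\pi)\big) \Big], \]

    where f^M(\pi) is the mean reward of decision \pi under model M, \pi_M is the optimal decision for M, and D^2_{\mathrm{H}} is the squared Hellinger distance. Informally, the DEC trades off the regret of a decision against the information it reveals about the true model.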
     

    Shankar Sastry

    Human Automation Teams in Societal Scale Systems

    Opportunities abound for the transformation of societal systems using new technologies and business models to address some of the most pressing problems in diverse sectors such as energy, transportation, health care, manufacturing and financial systems, most notably through the integration of IoT, AI/ML and cloud computing. The transformation of societal systems raises questions about economic models for that transformation, privacy, (cyber)security and fairness. Indeed, “mechanism design” for societal-scale systems is a key feature in transitioning the newest technologies and providing new services. Crucially, human beings interact with automation and change their behavior in response to the incentives offered to them. Training, learning and adaptation in human automation teams (HAT) is one of the most engaging problems in AI/ML systems today. In this talk, I will present a few vignettes: how to align societal objectives with Nash equilibria using suitable incentive design, and proofs of stability of decentralized decision making while learning preferences.
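
    As a toy illustration of the incentive-design problem mentioned above (our formulation, not necessarily the one used in the talk): a designer chooses side payments \tau_i so that self-interested play implements the societal objective,

    \[ \text{find } \tau = (\tau_1, \dots, \tau_n) \quad \text{such that} \quad \mathrm{NE}\big(u_1 + \tau_1, \dots, u_n + \tau_n\big) \;\subseteq\; \operatorname*{arg\,max}_{a \in A}\, W(a), \]

    where u_i is agent i's utility over joint actions a \in A, \mathrm{NE}(\cdot) denotes the set of Nash equilibria of the game with the modified payoffs, and W is the societal welfare objective.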
     

    Gal Vardi

    Implications of the Implicit Bias in Neural Networks

    When training large neural networks, there are generally many weight combinations that will perfectly fit the training data. However, gradient-based training methods somehow tend to reach those which generalize well, and understanding this “implicit bias” has been a subject of extensive research. In this talk, I will discuss recent works which show several implications of the implicit bias (in homogeneous neural networks trained with the logistic loss): (1) in shallow univariate neural networks, the implicit bias provably implies generalization; (2) by using a characterization of the implicit bias, it is possible to reconstruct a significant fraction of the training data from the parameters of a trained neural network, which might shed light on representation learning and memorization in deep learning, but might also have negative implications for privacy; (3) in certain settings, the implicit bias provably implies convergence to non-robust networks, i.e., networks which are susceptible to adversarial examples.

    Based on joint works with Niv Haim, Itay Safran, Gilad Yehudai, Michal Irani, Jason D. Lee, and Ohad Shamir.
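
    For context on the characterization invoked in (2): for homogeneous networks trained with the logistic loss, it is known (Lyu and Li, 2020; Ji and Telgarsky, 2020) that gradient flow converges in direction to a KKT point of the margin-maximization problem

    \[ \min_{\theta} \; \tfrac{1}{2}\,\|\theta\|^2 \quad \text{subject to} \quad y_i\, f(\theta; x_i) \;\ge\; 1 \;\; \text{for all } i, \]

    where f(\theta; x) is the network output on input x and (x_i, y_i) are the training examples. Our reading is that the reconstruction result exploits the stationarity conditions of this problem, under which the trained parameters are a weighted combination of gradients evaluated at the training points.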

  • Accommodations

    Group A – PIs & Speakers
    Business-class airfare for flights over 5 hours
    Hotel accommodations for up to 3 nights
    Reimbursement of local expenses

    Group B – Out-of-town Participants
    Economy airfare
    Hotel accommodations for up to 3 nights
    Reimbursement of local expenses

    Group C – Local Participants
    No funding provided besides hosted conference meals.

    Group D – Remote Participants
    A Zoom link will be provided.

    Personal Car
    For participants in Groups A & B driving to Manhattan, the James NoMad hotel offers valet parking. Please note there are no in-and-out privileges when using the hotel’s garage; therefore, participants are encouraged to walk or take public transportation to the Simons Foundation.

  • Hotel

    Participants in Groups A & B who require accommodations are hosted by the foundation for a maximum of three nights at The James NoMad Hotel. Any additional nights are at the attendee’s own expense. To arrange accommodations, please register at the link above.

    The James NoMad Hotel
    22 E 29th St
    New York, NY 10016
    (between 28th and 29th Streets)
    https://www.jameshotels.com/new-york-nomad/


  • COVID-19 Policy

    ALL in-person meeting attendees must be vaccinated against COVID-19 with a World Health Organization–approved vaccine, be beyond the 14-day inoculation period of their final dose, and provide proof of vaccination upon arrival at the conference. A list of acceptable vaccines is available on the WHO's website.

  • Reimbursement and Travel Policy

    Any expenses not directly paid for by the Simons Foundation are subject to reimbursement based on the foundation’s travel policy. An email will be sent within a week following the conclusion of the meeting with further instructions on submitting your expenses via the foundation’s web-based expense reimbursement platform.
    Receipts are required for any expenses over $50 USD and are due within 30 days of the conclusion of the meeting. Should you have any questions, please contact Emily Klein.

  • Contacts

    Registration and Travel Assistance
    Ovation Travel Group
    sfnevents@ovationtravel.com
    (917) 408-8384 (24-Hours)
    www.ovationtravel.com

    Meeting Questions and Assistance
    Emily Klein
    Event Coordinator, MPS, Simons Foundation
    eklein@simonsfoundation.org
    (646) 751-1262
