Mathematics of Deep Learning Seminar: Stefanie Jegelka

Date & Time


Title: Learning in Graph Neural Networks

Abstract: Graph Neural Networks (GNNs) have become a popular tool for learning representations of graph-structured inputs, with applications in computational chemistry, recommender systems, pharmacology, reasoning, and many other areas.
In this talk, I will present recent results on learning with message-passing GNNs. We begin with generalization bounds for GNNs that reveal connections to recurrent neural networks (RNNs). Next, we examine inductive biases by relating the structure of the architecture to the structure of the task: although many networks can represent a given task, some architectures learn it better than others. We study these relations both within and outside the training distribution, present empirical and theoretical results, and outline future directions.
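For readers unfamiliar with the message-passing GNNs the abstract refers to, the following is a minimal, self-contained sketch in Python/NumPy of one sum-aggregation message-passing scheme. The function and weight names are illustrative assumptions, not drawn from the talk or its papers.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def message_passing_layer(H, A, W_self, W_neigh):
        # One round of message passing: each node sums its neighbors'
        # features (A @ H), combines them with its own features, and
        # applies a shared nonlinearity.
        M = A @ H
        return relu(H @ W_self + M @ W_neigh)

    # Toy example: a 4-node path graph with 8-dimensional node features.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    rng = np.random.default_rng(0)
    H = rng.normal(size=(4, 8))            # initial node features
    W_self = 0.1 * rng.normal(size=(8, 8))
    W_neigh = 0.1 * rng.normal(size=(8, 8))

    for _ in range(3):                     # three message-passing rounds
        H = message_passing_layer(H, A, W_self, W_neigh)

    graph_embedding = H.sum(axis=0)        # sum-pool into a graph representation

After k rounds, each node's representation depends on its k-hop neighborhood; the unrolled computation resembles a recurrent network applied along the graph, which is one intuition behind the GNN-RNN connection mentioned above.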

This talk is based on joint work with Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, Vikas Garg and Tommi Jaakkola.
