Machine Learning at the Flatiron Institute Seminar: Siamak Ravanbakhsh

Title: Symmetry, Beyond Invariant Networks

Abstract: Invariant and equivariant networks have been the primary mechanism for imbuing machine learning with symmetry awareness. However, constraining architectures with symmetry may be infeasible or even undesirable. Infeasibility may arise from limitations of network design or a lack of information about the transformations, while undesirability may stem from the approximate nature of symmetries or suboptimal use of compute. In this talk, I'll briefly review several works from our group that use symmetries beyond equivariant networks. These examples explore symmetry in different learning paradigms, ranging from recent and ongoing work on generative modelling to prior work on self-supervised learning, physics-informed learning, and reinforcement learning.
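
For context on the terminology in the abstract, below is a minimal sketch (not part of the talk) of one common way a symmetry constraint is built directly into an architecture: a permutation-invariant set network in the Deep Sets style. The framework (PyTorch), layer sizes, and class name are illustrative assumptions.

```python
# Minimal sketch: a permutation-invariant network, f(x) = rho(sum_i phi(x_i)).
# Sum pooling over set elements makes the output independent of element order.
import torch
import torch.nn as nn

class InvariantSetNetwork(nn.Module):
    def __init__(self, in_dim=3, hidden_dim=64, out_dim=1):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, hidden_dim))
        self.rho = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, out_dim))

    def forward(self, x):                  # x: (batch, set_size, in_dim)
        pooled = self.phi(x).sum(dim=1)    # pooling discards element order
        return self.rho(pooled)

x = torch.randn(2, 5, 3)
perm = torch.randperm(5)
model = InvariantSetNetwork()
# Permuting the set elements leaves the output unchanged (up to float error).
assert torch.allclose(model(x), model(x[:, perm]), atol=1e-5)
```

Equivariant layers follow the same idea but preserve, rather than discard, how outputs transform with the inputs; the talk concerns settings where such hard constraints cannot, or should not, be enforced exactly.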
