Could Machines Learn Like Humans?

  • Speaker
  • Yann LeCun, Facebook AI Research
    New York University
About Presidential Lectures

Presidential Lectures are free public colloquia centered on four main themes: Biology, Physics, Mathematics and Computer Science, and Neuroscience and Autism Science. These curated, high-level scientific talks feature leading scientists and mathematicians and are intended to foster discourse and drive discovery among the broader NYC-area research community. We invite those interested in the topic to join us for this weekly lecture series.

Deep learning has enabled significant progress in computer perception, natural language understanding and control. However, almost all these successes rely largely on supervised learning, where the machine is required to predict human-provided annotations, or on model-free reinforcement learning, where the machine learns actions that maximize rewards. Supervised learning requires a large number of labeled samples, making it practical only for certain tasks. Reinforcement learning requires a very large number of interactions with the environment (and many failures) to learn even simple tasks. In contrast, animals and humans seem to learn vast amounts of task-independent knowledge about how the world works through mere observation and occasional interactions. Learning a new task or skill then requires very few samples or interactions with the world: we learn to drive and to fly planes in about 30 hours of practice with no fatal failures. What learning paradigm do humans and animals use to learn so efficiently?

In this lecture, Yann LeCun will propose the hypothesis that self-supervised learning of predictive world models is an essential missing ingredient of current approaches to AI. With such models, one can predict outcomes and plan courses of action. One could argue that prediction is the essence of intelligence. Good predictive models may be the basis of intuition, reasoning and “common sense,” allowing us to fill in missing information: predicting the future from the past and present, or inferring the state of the world from noisy percepts. After a brief presentation of the state of the art in deep learning, he will discuss some promising principles and methods for self-supervised learning.
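The core idea of self-supervised learning described above — obtaining the training signal from the data itself by hiding part of the input and predicting it from the rest — can be illustrated with a toy sketch. This is not LeCun's method or any specific system from the lecture, just a minimal assumed example: a linear model learns to predict the masked second half of a noisy sinusoid from its visible first half, with no human-provided labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signal(n=16):
    """Generate a noisy sinusoid with a random phase; no human labels involved."""
    phase = rng.uniform(0, 2 * np.pi)
    t = np.linspace(0, 2 * np.pi, n)
    return np.sin(t + phase) + 0.05 * rng.normal(size=n)

# Build a self-supervised dataset: the input is the visible first half of each
# signal, and the prediction target is the hidden second half of the SAME signal.
signals = np.stack([make_signal() for _ in range(500)])
half = signals.shape[1] // 2
X, Y = signals[:, :half], signals[:, half:]  # "labels" come from the data itself

# Fit a linear predictor by least squares (a deliberately simple stand-in
# for the learned predictive models discussed in the lecture).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Evaluate: fill in the masked half of fresh, unseen signals.
test = np.stack([make_signal() for _ in range(100)])
pred = test[:, :half] @ W
mse = np.mean((pred - test[:, half:]) ** 2)
print(f"mean squared error on held-out signals: {mse:.4f}")
```

The point of the sketch is that the supervisory signal is free: every observed signal provides both the input and the target, which is what lets this style of learning scale to vast amounts of unlabeled observation.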

About the Speaker

Yann LeCun is Vice President and Chief AI Scientist at Facebook and Silver Professor at New York University, affiliated with the Courant Institute and the Center for Data Science. He was the Founding Director of Facebook AI Research and of the NYU Center for Data Science. He received an EE diploma from ESIEE in Paris in 1983 and a Ph.D. in computer science from Université Pierre et Marie Curie in Paris in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996 and joined NYU in 2003 after a short tenure at the NEC Research Institute. In late 2013, LeCun became Director of AI Research at Facebook, while remaining on the NYU faculty part-time. He was a visiting professor at Collège de France in 2016. His research interests include machine learning and artificial intelligence, with applications to computer vision, natural language understanding, robotics, and computational neuroscience. He is best known for his work in deep learning and the invention of the convolutional network method, which is widely used for image, video and speech recognition. He is a member of the US National Academy of Engineering and the recipient of the 2014 IEEE Neural Network Pioneer Award, the 2015 IEEE Pattern Analysis and Machine Intelligence Distinguished Researcher Award, the 2016 Lovie Award for Lifetime Achievement, the University of Pennsylvania Pender Award, and honorary doctorates from IPN Mexico and EPFL.
