The Mathematics of Mind and Brain

About Presidential Lectures

Presidential Lectures are free public colloquia centered on four main themes: Biology, Physics, Mathematics and Computer Science, and Neuroscience and Autism Science. These curated, high-level scientific talks feature leading scientists and mathematicians and are intended to foster discourse and drive discovery among the broader NYC-area research community. We invite those interested in the topic to join us for this weekly lecture series.

The mind and brain can be thought of as computational systems — but what kinds of computations do they carry out, and what kinds of mathematics can best characterize these computations? The last sixty years have seen several prominent proposals: the mind/brain should be viewed as a logic engine, or a probability engine, or a high-dimensional vector processor, or a nonlinear dynamical system. Yet none of these proposals appears satisfying on its own. The most important lessons learned concern the central role of mathematics in bridging different perspectives and levels of analysis — different views of the mind, or how the mind and the brain relate — and the need to integrate traditionally disparate branches of mathematics and paradigms of computation in order to build these bridges.

I will discuss three case studies in integration, two recent successes and one that is more wide open. The first success has come in bridging two different ways to describe the mind, as a logic engine (dominant from the 1950s through the 1970s), and as a probability engine (dominant since the 1990s). The recent development of probabilistic programs offers a way to combine the expressiveness of symbolic logic for representing abstract and composable knowledge with the capacity of probability theory to support useful inferences and decisions from incomplete and noisy data. Probabilistic programs let us build the first quantitatively predictive mathematical models of core capacities of human common-sense thinking: intuitive physics and intuitive psychology, or how people reason about the dynamics of objects and infer the mental states of others from their behavior. A second success has come from using the mathematics of probability to bridge the cognitive and neural levels of analysis, unifying models of inference and decision in mind and brain. But what we do not yet understand is how to connect the mathematics of logic, symbols, and programs, essential for describing knowledge at the cognitive level, with the mathematics of high-dimensional vector spaces and nonlinear dynamics that has been most influential in describing how the brain learns and computes. How can symbols and logic be embedded in or effectively emerge from the mathematics of vector spaces and dynamical systems? This is the twenty-first century version of the mind-body problem, and arguably the greatest outstanding theoretical question in neuroscience.
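To make the core idea concrete, here is a minimal sketch — not from the lecture, with all names and numbers hypothetical — of how a probabilistic program pairs a symbolic generative model with probabilistic inference. The model is ordinary code that runs forward from hidden causes to data; inference runs it backward, from observed data to the causes that likely produced it.

```python
# A toy probabilistic program: a generative model written as ordinary code,
# with inference by rejection sampling. Everything here is a hypothetical
# illustration of the paradigm, not a model from this research program.

import random
from collections import Counter

def generative_model():
    """Sample a hidden cause, then the data it would produce."""
    is_fair = random.random() < 0.5           # prior: P(fair coin) = 0.5
    p_heads = 0.5 if is_fair else 0.9         # a trick coin lands heads 90% of the time
    flips = tuple(random.random() < p_heads for _ in range(5))
    return is_fair, flips

def infer(observed_flips, num_samples=100_000):
    """Rejection sampling: keep the hidden causes whose simulated data match."""
    kept = Counter()
    for _ in range(num_samples):
        is_fair, flips = generative_model()
        if flips == observed_flips:
            kept[is_fair] += 1
    total = sum(kept.values())
    return {cause: count / total for cause, count in kept.items()}

# Observing five heads in a row shifts belief sharply toward the trick coin:
posterior = infer(observed_flips=(True, True, True, True, True))
print(posterior)   # roughly {False: 0.95, True: 0.05}
```

Practical probabilistic programming languages in this tradition replace the brute-force rejection loop with far more efficient inference algorithms, but the division of labor is the same: symbolic programs express the knowledge, probability theory licenses the inferences.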

Suggested Reading:

How to Grow a Mind: Statistics, Structure, and Abstraction

About the Speaker:

Josh Tenenbaum is a Professor in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. Tenenbaum and his colleagues in the Computational Cognitive Science group study one of the most basic and distinctively human aspects of cognition: the ability to learn so much about the world, rapidly and flexibly. Given just a few relevant experiences, even young children can infer the meaning of a new word, the hidden properties of an object or substance, or the existence of a new causal relation or social rule. These inferences go far beyond the data given: after seeing three or four examples of “horses”, a two-year-old will confidently judge whether any new entity is a horse or not, and she will be mostly correct, except for the occasional donkey or camel.

We want to understand these everyday inductive leaps in computational terms. What is the underlying logic that supports reliable generalization from so little data? What are its cognitive and neural mechanisms, and how can we build more powerful learning machines based on the same principles?

These questions demand a multidisciplinary approach. Tenenbaum and his group’s research combines computational models (drawing chiefly on Bayesian statistics, probabilistic generative models, and probabilistic programming) with behavioral experiments in adults and children. Their models make strong quantitative predictions about behavior, but more importantly, they attempt to explain why cognition works, by viewing it as an approximation to ideal statistical inference given the structure of natural tasks and environments.
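One well-known instance of this approach is Bayesian concept learning with the “size principle”: among hypotheses consistent with the observed examples, more specific (smaller) ones gain likelihood rapidly as examples accumulate, which is one way a learner can generalize confidently from just a few instances. The sketch below uses hypothetical concepts and numbers for illustration.

```python
# A toy Bayesian concept learner with the size principle (illustrative only).
# Each hypothesis is a set of items; its likelihood per example is 1/|h|,
# so smaller hypotheses that still contain every example win out quickly.

def posterior(hypotheses, priors, examples):
    """P(h | examples) ∝ P(h) * (1/|h|)^n, for h containing every example."""
    n = len(examples)
    scores = {}
    for name, members in hypotheses.items():
        if all(x in members for x in examples):
            scores[name] = priors[name] * (1.0 / len(members)) ** n
        else:
            scores[name] = 0.0
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Hypothetical concepts over the numbers 1..100.
hypotheses = {
    "powers of two": {2 ** k for k in range(1, 7)},   # {2, 4, ..., 64}: 6 members
    "even numbers":  set(range(2, 101, 2)),           # 50 members
}
priors = {"powers of two": 0.5, "even numbers": 0.5}

# Three examples consistent with both concepts already favor the smaller one:
print(posterior(hypotheses, priors, examples=[16, 8, 2]))
# roughly {'powers of two': 0.998, 'even numbers': 0.002}
```

The same logic scales up, with richer hypothesis spaces, to phenomena like a child confidently extending a new word after only a handful of examples.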

While their core interests are in human learning and reasoning, they also work actively in machine learning and artificial intelligence. These two programs are inseparable: bringing machine-learning algorithms closer to the capacities of human learning should lead to more powerful AI systems as well as more powerful theoretical paradigms for understanding human cognition.

Current research in Tenenbaum’s group explores the computational basis of many aspects of human cognition: learning concepts, judging similarity, inferring causal connections, forming perceptual representations, learning word meanings and syntactic principles in natural language, noticing coincidences and predicting the future, inferring the mental states of other people, and constructing intuitive theories of core domains, such as intuitive physics, psychology, biology, or social structure.

Homepage: http://web.mit.edu/cocosci/josh.html
