James DiCarlo, M.D., Ph.D.

Peter De Florez Professor of Neuroscience; Director, MIT Quest for Intelligence; McGovern Institute for Brain Research Investigator, Massachusetts Institute of Technology

James DiCarlo’s website

James DiCarlo is the Peter De Florez Professor of Neuroscience, Director of the MIT Quest for Intelligence, and a McGovern Institute for Brain Research Investigator at the Massachusetts Institute of Technology (MIT). He received his Ph.D. in biomedical engineering and his M.D. from Johns Hopkins University in 1998 and completed postdoctoral training in primate visual neurophysiology at Baylor College of Medicine. He joined the MIT faculty in 2002. He is an Alfred P. Sloan Research Fellow, a Pew Scholar in the Biomedical Sciences, and a McKnight Scholar in Neuroscience.

The research goal of DiCarlo’s group is a computational understanding of the brain mechanisms that underlie object recognition. His group is currently focused on understanding how transformations carried out by a series of neocortical processing stages, called the primate ventral visual stream, untangle object identity from other latent image variables such as object position, scale, and pose. He and his collaborators have shown that populations of neurons at the highest cortical visual processing stage (IT) rapidly convey explicit representations of object identity, even in the face of naturally occurring image variability. His group has found that the ventral stream’s ability to accomplish this feat is rapidly reshaped by natural visual experience, and they can now monitor the neuronal substrates of this learning online. This points the way toward understanding how the visual system uses the statistics of the visual world to “learn” neuronal representations that automatically untangle object identity. He and his collaborators have also shown how carefully designed visual recognition tests can be used to discover new, high-performing bio-inspired algorithms and to efficiently explore the hypothesis space of possible cortical algorithms.

His group is currently using a combination of large-scale neurophysiology, brain imaging, optogenetic methods and high-throughput computational simulations to understand the neuronal mechanisms and fundamental cortical computations that underlie the construction of these powerful image representations. They aim to use this understanding to inspire and develop new machine vision systems, to provide a basis for new neural prosthetics (brain-machine interfaces) in order to restore or augment lost senses, and to provide a foundation upon which the community can understand how high-level visual representation is altered in human conditions such as agnosia, autism and dyslexia.

Current Project: Neural computations for visual form processing and form-based cognition

Past Project: Modulating the dynamics of human-level neuronal object representations
