Scott Linderman

Scott Linderman is a postdoctoral fellow at Columbia University in the labs of Liam Paninski and Dave Blei. He graduated with a B.S. in electrical and computer engineering from Cornell University in 2008. After working as a software engineer at Microsoft for three years, he completed his Ph.D. in computer science at Harvard University in 2016, under the supervision of Ryan Adams and Leslie Valiant. He works at the intersection of computational neuroscience and machine learning, developing structured models and scalable inference algorithms to decipher large-scale multineuronal recordings.



“Discovering Latent States of Neural Activity and Behavior”

For much of the history of neuroscience, neural activity was observed one neuron, or at most a handful of neurons, at a time. Now, thousands of neurons can be observed simultaneously, and their activity can be correlated with precise measurements of an animal’s repertoire of behaviors. These developments offer an unprecedented opportunity to link the complex dynamics of neural activity to natural behavior, but the massive data sets generated by new recording techniques pose extraordinary statistical and computational challenges. It is unclear, for instance, how to formulate a joint model of neural activity and behavior when their relationship could, in principle, be infinitely complex. Fortunately, preliminary observations of large-scale neural recordings reveal that groups of neurons reliably participate together during particular behaviors, greatly reducing the space of possibilities. How best to characterize and exploit this structure, however, remains an open question, and one obstacle is the lack of sufficiently sophisticated statistical and computational tools.

I have developed models known as “switching linear dynamical systems,” which can identify both discrete and continuous latent states of neural activity and behavior. For example, these models can learn to segment movies of mouse behavior into interpretable “syllables,” such as turning left, turning right, or rearing up on the hind legs. Moreover, they can discover the continuous dynamics of neural activity that underlie these behavioral syllables. Because these models are computationally intensive to fit, prohibitively so with traditional methods, I have developed novel algorithms that scale them to modern data sets. Models such as these will allow for the design of new experiments in which neural activity is manipulated in real time, leading to causal explanations of brain and behavior.
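
To make the structure of these models concrete, the sketch below samples from the generative model of a simple switching linear dynamical system: a Markov chain over discrete states (the behavioral “syllables”) selects which set of linear dynamics drives a continuous latent trajectory, which is then read out into noisy observations such as neural activity. The dimensions, parameter values, and noise scales here are illustrative assumptions for this sketch only; they stand in for the fitted models and scalable inference algorithms described above.

```python
# Minimal sketch of an SLDS generative model (illustrative parameters, not fitted).
import numpy as np

rng = np.random.default_rng(0)
K, D, N, T = 3, 2, 10, 100   # discrete states, latent dimension, neurons, time steps

# "Sticky" Markov transition matrix over discrete states; each row sums to 1.
P = 0.9 * np.eye(K) + 0.1 / (K - 1) * (1.0 - np.eye(K))

def random_rotation(theta):
    # 2-D rotation used as a simple, stable linear dynamics matrix.
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# One set of linear dynamics per discrete state: x_t = A_k x_{t-1} + b_k + noise.
As = [0.95 * random_rotation(theta) for theta in (0.1, 0.3, 0.5)]
bs = [rng.normal(scale=0.1, size=D) for _ in range(K)]

# Shared linear observation model: y_t = C x_t + d + noise.
C = rng.normal(size=(N, D))
d = rng.normal(size=N)

# Sample a trajectory of discrete states z, continuous states x, and observations y.
z = np.zeros(T, dtype=int)
x = np.zeros((T, D))
y = np.zeros((T, N))
z[0] = rng.integers(K)
x[0] = rng.normal(size=D)
y[0] = C @ x[0] + d + 0.1 * rng.normal(size=N)
for t in range(1, T):
    z[t] = rng.choice(K, p=P[z[t - 1]])                                # syllable switch
    x[t] = As[z[t]] @ x[t - 1] + bs[z[t]] + 0.05 * rng.normal(size=D)  # continuous dynamics
    y[t] = C @ x[t] + d + 0.1 * rng.normal(size=N)                     # noisy observations
```

Fitting such a model reverses this process: the discrete syllable sequence and the continuous trajectory must be recovered from the observations alone, which is what makes inference computationally demanding at the scale of modern recordings.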