Liang Zhou is a Ph.D. student at the Gatsby Computational Neuroscience Unit at University College London. He double majored in neuroscience and computer science at MIT, where he worked on cognitive models of responsibility in intuitive physics. He then hopped across the pond to the UK on a Marshall Scholarship, where he has made no progress on a British accent but has discovered that he likes tea. Liang grew up in southern California and misses the sun dearly in London. In his free time, he enjoys running, dance, dark chocolate, and reading (Reddit, but also real books sometimes).
Principal Investigator: Peter Latham
Fellow: Paul Okeahalam
Humans can efficiently learn to solve a wide range of tasks. A popular approach to modeling task learning is to train deep recurrent neural networks (RNNs), with the hope that network activity resembles that of real neurons and real behavior. However, this approach typically requires large amounts of training data, generalizes poorly across multiple tasks, and relies on biologically implausible learning rules. Our goal is to address these issues by selectively learning only the input and output representations of a randomly initialized chaotic RNN, which allows it to flexibly perform an array of neuroscientifically interesting tasks. We'll also analyze our new and improved model from a dynamical systems perspective, with the aim of understanding how it reconfigures inherently chaotic dynamics across a variety of computational goals.
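To give a flavor of the general idea (this is an illustrative sketch in the spirit of reservoir computing, not the project's actual model): the recurrent weights of a chaotic RNN are fixed at random initialization, and only the linear readout is trained — here with ridge regression on a toy delayed-memory task. All parameter values below (network size, spectral radius, delay) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                  # reservoir size (arbitrary)
T = 1000                 # number of time steps
spectral_radius = 1.2    # > 1 pushes the autonomous dynamics toward chaos

# Fixed random recurrent and input weights -- these are never trained.
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(0, 1.0, (N, 1))

# Toy task: reproduce the input signal delayed by 5 steps.
u = rng.uniform(-1, 1, (T, 1))
target = np.roll(u, 5, axis=0)

# Run the reservoir forward and record its states.
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in @ u[t])
    states[t] = x

# Train only the readout weights via ridge regression,
# discarding an initial washout period.
washout = 50
S, y = states[washout:], target[washout:]
ridge = 1e-4
w_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ y)

pred = S @ w_out
mse = np.mean((pred - y) ** 2)
print(f"readout MSE: {mse:.4f}")
```

Because the recurrent weights stay fixed, switching tasks only means fitting a new readout (and, in richer variants, new input weights), which is what makes this family of models attractive for multi-task settings.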