Supported Projects

Analyzing a complex motor act at the mesoscopic scale

Our motor systems are impressive. We can learn to effortlessly produce highly skilled muscle movements, such as hitting a golf ball or playing the violin. Our brains must step through sequential patterns of activity that enable these kinds of behaviors, but we know very little about how this is accomplished.

To investigate this issue, our laboratory studies another skilled, complex motor behavior: the courtship song of the zebra finch. This song is learned from the bird’s father in a process that parallels speech acquisition in infants. Yet, unlike with speech production, much is known about the neural circuits that produce singing behavior. The song is driven by a network of brain structures, one of which contains the circuitry for generating the sequence of syllables that compose the song. Different neurons in that region show a flurry of activity at different points in time during each song. Little is known about how the song is represented in these neurons, in part because we lack methods to record from multiple neurons simultaneously in this brain area. Using 2-photon microscopy, we have developed a method to watch the neurons in the brain of a zebra finch while the bird produces its song. Our lab will work in close collaboration with theoretical neuroscientist Liam Paninski to develop quantitative methods to analyze these data. Our results are already beginning to shed light on how complex, learned motor acts are generated, not just in birdsong but also in other complex movements, in both health and disease.
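The idea of sequential activity driving a stereotyped syllable order can be sketched in a few lines. This is a toy illustration with made-up syllable labels, not the lab's actual circuit model: a chain of units, each exciting the next, reproduces the same sequence every time.

```python
# Toy sketch of sequence generation (hypothetical syllable labels, not the
# lab's model): a chain of units, each exciting the next, yields a
# stereotyped syllable order every time the chain is triggered.
syllables = ["a", "b", "c", "d"]
chain = {i: i + 1 for i in range(len(syllables) - 1)}  # unit i drives unit i+1

def sing(start=0):
    song, unit = [], start
    while unit is not None:
        song.append(syllables[unit])
        unit = chain.get(unit)  # None when the chain ends
    return song

print(sing())  # ['a', 'b', 'c', 'd']
```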

Attentional modulation of neuronal variability constrains circuit models

Each time you look at a picture on a wall, even though you see the same thing, the electrical activity of neurons in your brain’s visual areas will be slightly different. These slight differences are known as variability, and typically researchers seek to remove variability from their data by taking the average of neural responses over many trials of whatever task they are studying. Then, they build theoretical models based on the averaged data.

However, researchers have run into trouble with this approach: many different models seem to explain the same data, making it difficult to determine which model is correct. We propose a radically different approach in which we embrace the variability and use it to constrain new theoretical models. We will test our model using visual attention, the process by which observers focus on particular parts of a complex scene. We will then record the activity of large groups of neurons and examine how visual attention modulates the variability in those neurons. The goals of our research are threefold: Can our model account for the existing data on how attention changes variability? Can we conduct experiments to determine how attention affects the extent to which variability is shared among neurons? Can we extend our model to the new data we collect? By focusing on variability, we can overcome many of the shortcomings of previous models, establishing a new approach for studying neural circuits in the visual system and other areas of the brain.
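The contrast between trial-averaged responses and trial-to-trial variability can be made concrete with a toy simulation (made-up rates, not experimental data): two conditions share the same mean response, so averaging cannot tell them apart, but the variance can.

```python
import random

random.seed(0)
n_trials = 1000

def spike_count(p, n_inputs=100):
    # Binomial spike count: n_inputs chances to spike, each with probability p.
    return sum(random.random() < p for _ in range(n_inputs))

# Two hypothetical conditions with the SAME mean rate but different
# variability (attention is often described as reducing shared variability;
# here the "attended" condition simply averages two independent draws).
unattended = [spike_count(0.1) for _ in range(n_trials)]
attended = [(spike_count(0.1) + spike_count(0.1)) / 2 for _ in range(n_trials)]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Trial averages are nearly identical, so averaging discards the distinction;
# the variance, however, separates the two conditions cleanly.
print(abs(mean(unattended) - mean(attended)) < 0.6)   # True
print(variance(unattended) > variance(attended))      # True
```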

Catching fireflies: Dynamic neural computations for foraging

Imagine catching a firefly. There are multiple steps: seeing the firefly, estimating its location in between flashes, deciding when to move to catch it, and then moving. Each step engages different brain areas—for example, seeing the firefly activates visual processing regions whereas moving activates motor systems—yet how these areas interact to produce an action based on sensory experience is a mystery.

We have designed an experiment to get at these interactions using, it turns out, virtual fireflies. Working in monkeys, we trained the animals to forage through a virtual environment for flashing specks of light. As in real life, such a virtual task engages a variety of brain regions, including those involved in sensory and perceptual processing, navigation, decision-making, and movement. While previous work has studied each of these brain regions individually, we have taken the research one step further to study many regions at the same time. Using sophisticated recording technology, we simultaneously monitor the electrical activity of neurons in each brain region, allowing us to observe the flow of information from brain region to brain region as the animal performs the foraging task. Because even the simplest tasks require the precise coordination of many neural networks distributed across many different brain areas, this approach will be broadly applicable to studying many tasks the brain performs.

Circuit Mechanisms of Social Cognitive Computations

While we may take for granted social tasks such as inferring what others are thinking and feeling, or predicting someone’s next move, they pose considerable processing challenges for the brain. Social cognition is incredibly complex, with accurate judgments often made with very little evidence, yet we perform these tasks automatically, virtually every moment in time.

One skill at the core of social cognition is face recognition, itself a complex skill found in several species of primates, including humans. Face recognition is supported by a dedicated face-processing network, which encompasses many of the organizational features of the brain as a whole. Because this network is devoted to processing the diverse social cues the face conveys, it is an ideal entry point for investigating the neural mechanisms of social cognition. We will study activity in this model system at many different levels. In our experiments, we will measure the activity of individual neurons, or even parts of neurons, and we will measure the activity of local networks of neurons within a face-selective brain region. We will develop cutting-edge machine-learning techniques to analyze and understand this high-dimensional data and to create models of how this network processes information. This work will allow us not only to understand how faces are processed, but also how neurons interact to generate the key characteristics of social cognition. With this work we also aim to lay the foundation for future work in autism models to understand the neural mechanisms underlying the alterations of social information processing in this condition.

Communication between neural populations: circuits, coding, and behavior

Every time we open our eyes, we take in far too much visual information for our brains to fully process. To cope with this issue, our brains have learned to pick out the most important details in a visual scene and quickly decide how to act on that information.

That process requires multiple components — some parts of the brain encode sensory information, some parts retain this information while making a decision, and some parts decode sensory and cognitive information in order to act upon it. We want to determine how parts of the brain encode information, relay it to one another, and decode it, which is fundamental to proper brain function. We will record the electrical activity in the brains of animals as they perform a visual decision-making task. Because we can record from many neurons in different parts of the brain at once, we can analyze how these different regions communicate with one another. For example, brain activity patterns vary even when perceiving the same stimulus or making the same movement. How does this variation influence how one part of the brain communicates with another? How does communication change depending on the internal state of the animal or the nature of the task? Do these changes enhance learning or other cognitive functions? We will then build computer models of neural networks in the cortex and compare those models to our data. Our models can simulate how brain areas process information before passing it along and how different processing strategies would affect transmission across the brain. We can then compare our predictions to the data observed in our recordings. We anticipate these fundamental discoveries will be broadly applicable to any species, including humans.
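A common first step in this kind of inter-area analysis is to ask how well one area's activity linearly predicts another's. A minimal sketch, with made-up one-dimensional signals standing in for population activity, fits an ordinary least-squares model and reports the variance explained:

```python
import random

random.seed(1)

# Toy sketch of an inter-area communication analysis (hypothetical numbers):
# fit a linear model predicting a "target"-area signal from a "source"-area
# signal, then ask how much target variance the source accounts for.
n = 500
source = [random.gauss(0.0, 1.0) for _ in range(n)]
target = [0.8 * s + random.gauss(0.0, 0.5) for s in source]  # partial drive

mean_s = sum(source) / n
mean_t = sum(target) / n
cov = sum((s - mean_s) * (t - mean_t) for s, t in zip(source, target)) / n
var_s = sum((s - mean_s) ** 2 for s in source) / n
w = cov / var_s            # ordinary least-squares slope
b = mean_t - w * mean_s    # intercept

pred = [w * s + b for s in source]
ss_res = sum((t - p) ** 2 for t, p in zip(target, pred))
ss_tot = sum((t - mean_t) ** 2 for t in target)
r2 = 1 - ss_res / ss_tot   # fraction of target variance explained
```

The unexplained remainder is exactly the variability the paragraph above asks about: activity in the target area that the source signal does not account for.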

Computation-through-dynamics as a framework to link brain and behavior

The human brain is a daunting object: It contains nearly 100 billion neurons, each one linked to thousands of other neurons, making roughly 100 trillion connections in all. To make matters more complicated, each neuron can fire up to hundreds of times per second.

Understanding this blizzard of activity is one of science’s final frontiers. The brain’s basic functions — such as planning, decision-making, and behavior — require multiple brain areas and many thousands or millions of neurons, generating complex patterns of neural activity. Determining what aspects of this ever-changing activity are relevant for the task at hand, and how it ultimately leads to thoughts and behavior, can seem an insurmountable challenge. One may think that studying very simple behaviors will help us solve this challenge. However, the opposite may be true — the solution could lie in making the tasks we study more complex. By studying how the brain solves the same problem in many different settings, our group aims to better distinguish the relevant, invariant components of brain activity — those that are common to all settings — from components that are variable and thus not critical to the task at hand. We seek to develop a mathematical framework that can explain how invariant components emerge from the seeming chaos of neural activity and how their dynamics implement the computations necessary to drive behavior. We refer to this framework as ‘computation-through-dynamics.’ We will ensure our mathematical descriptions remain faithful to the reality of the brain by tightly integrating experimental and theoretical approaches within our group. With this interdisciplinary approach, we hope to make the task of understanding the human brain’s 100 billion neurons a little more tractable.
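The split between invariant and variable components can be illustrated with a toy decomposition (made-up numbers; the actual framework uses far richer dynamical models): average across conditions to get the shared temporal profile, and treat what remains as condition-specific.

```python
# Toy decomposition (made-up numbers): split each condition's activity trace
# into a component shared across conditions (condition-invariant) and a
# condition-specific remainder.
cond_a = [1.0, 2.0, 3.0, 2.0]   # activity over time, setting A
cond_b = [2.0, 3.0, 4.0, 3.0]   # activity over time, setting B

invariant = [(a + b) / 2 for a, b in zip(cond_a, cond_b)]   # shared profile
specific_a = [a - m for a, m in zip(cond_a, invariant)]     # A's remainder
specific_b = [b - m for b, m in zip(cond_b, invariant)]     # B's remainder

print(invariant)   # [1.5, 2.5, 3.5, 2.5]: the common temporal profile
print(specific_a)  # [-0.5, -0.5, -0.5, -0.5]: a constant offset, nothing more
```

Here the entire time course turns out to be shared, and each condition differs only by a constant offset, exactly the kind of "variable but task-irrelevant" component the framework aims to set aside.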

Computational principles of mechanisms underlying cognitive functions

The brain is composed of billions of neurons. Many of these neurons respond to particular features of the sensory inputs. For example, in the visual system, there are neurons that respond when viewing a horizontal edge but not when viewing a vertical one. These types of neurons have been studied for a long time.

However, when the brain is asked to perform tasks more complex than simply observing an individual feature of a visual stimulus, a new class of neurons emerges. These neurons are not so simple: they respond to multiple features of the sensory input, and their response is modulated by our expectations, our feelings and, more generally, by any aspect of our thoughts. Said to have “mixed selectivity,” these neurons were ignored for many years, but, thanks to work from our group and others, their importance is just now being appreciated. That is, even though their responses are not easily interpretable, they play a critical role in solving complex cognitive tasks. In abstract terms, these neurons increase the ability of downstream “output” neurons to generate a much larger set of responses for a given set of inputs. This enhanced ability likely underlies the capacity to perform a complex and flexible set of actions based on the same sensory input. Working in close collaboration with experimental neuroscientist Daniel Salzman, we aim to understand the role of mixed-selectivity neurons in complex cognitive tasks. We will use technology for simultaneously monitoring the activity of many neurons in multiple brain regions to determine what information is contained in those brain areas and how that information is related to how animals behave. The close link between theory and experiments will facilitate the development of new mathematical tools for analyzing the collective activity of neurons, bringing us one step closer to understanding not just how neurons encode simple stimuli such as horizontal and vertical bars, but also how they perform complex tasks such as decision-making.
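The abstract claim that mixed selectivity expands what a downstream readout can do has a classic toy demonstration: a parity (XOR-like) task over two features. With only "pure" feature responses, no linear readout solves it; adding one unit that responds to the product of the features makes it trivially solvable. The sketch below brute-forces a grid of readout weights to show this (a self-contained illustration, not the collaboration's analysis):

```python
from itertools import product

# Parity task over two features coded as +/-1: label is +1 when they agree.
points = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
labels = [1, -1, -1, 1]

def linearly_separable(features):
    # Brute-force search over a coarse grid of linear readout weights.
    grid = [x / 2 for x in range(-6, 7)]  # -3.0, -2.5, ..., 3.0
    for w in product(grid, repeat=len(features[0])):
        for b in grid:
            out = [1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else -1
                   for f in features]
            if out == labels:
                return True
    return False

pure = points                               # "pure selectivity": x and y only
mixed = [(x, y, x * y) for x, y in points]  # add one mixed-selectivity unit

print(linearly_separable(pure))   # False: parity is not linearly separable
print(linearly_separable(mixed))  # True: the product unit makes it separable
```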

Cortical mechanisms of multistable perception

Some visual patterns can be seen in more than one way, like the famous Necker cube which reverses its 3D orientation spontaneously. These “multistable” stimuli are especially useful for the study of brain mechanisms of perception, because identical stimuli can give rise to different percepts, giving us direct access to the mental states that underlie perception.

We plan to use this access to explore the areas of the monkey brain that control switches in perceptual state. We will use three-component moving "plaid" patterns because we know that they are processed by a well-studied area, MT, which we have studied in the past. We plan to extend these observations by measuring the activity of multiple neurons simultaneously, and by relating them to the animals' reports of changes in stimulus appearance. MT is certainly not the only brain region involved in perceptual multistability, though it may show related activity. We will therefore also record from areas that are likely to control perceptual switches, investigating how neuronal activity in those areas drives changes in the monkey’s perceptual state. Our work will not only shed light on neural activity in the visual system and its perceptual and behavioral consequences, but will also forge broader links between brain activity and behavior.
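One standard intuition for why an unchanging stimulus produces alternating percepts is adaptation: the dominant percept's neural representation fatigues until the suppressed one takes over. A toy model with hypothetical parameters (not the project's model) shows the percept switching back and forth with no change in input:

```python
# Toy adaptation model of perceptual switching (hypothetical parameters): the
# dominant percept adapts until the suppressed one takes over, so the percept
# alternates even though the "stimulus" never changes.
percept = 0
adapt = [0.0, 0.0]
timeline = []
for t in range(200):
    adapt[percept] += 0.05                                    # dominant adapts
    adapt[1 - percept] = max(0.0, adapt[1 - percept] - 0.02)  # other recovers
    if adapt[percept] - adapt[1 - percept] > 1.0:
        percept = 1 - percept                                 # spontaneous switch
    timeline.append(percept)

switches = sum(timeline[i] != timeline[i + 1] for i in range(len(timeline) - 1))
print(switches >= 3)  # True: multiple switches despite constant input
```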

Corticocortical Signaling Between Populations of Neurons

How the human brain’s nearly 100 billion neurons interact with each other to give rise to cognition, emotion, and action is a total mystery. Neurons are embedded in dense networks, with thousands of connections to other neurons. These networks have connections at both the local level, between neighboring neurons, and at the global level, as projections between brain areas.

Yet even the basic principles of how all of this complicated networked activity leads to thought and behavior are unknown. As we unravel this mystery, understanding the interactions between brain regions will be critical. One roadblock has been that previous studies have focused on correlations between pairs of neurons or field potentials, each recorded in a different brain area. Yet neurons may also communicate at the level of groups, or populations. Emerging technology now allows us to measure the activity of hundreds of neurons in multiple brain areas simultaneously, and advances in statistical analysis are beginning to decipher this complex population-level activity. In our research, we will focus on this question by studying the visual system of monkeys. We will place electrodes in two brain areas involved in processing visual information, V1 and V2. Neurons in V1 send information about visual scenes to neurons in V2; the picture is complicated by feedback connections as well, with V2 sending information back to V1. With help from sophisticated statistical modeling, we will determine what information is passed along, in which direction, and why. This will lead to experiments in which we compare brain activity between brain areas V1 and V4 while monkeys report decisions based on their visual percepts. By studying and modeling the flow of information across multiple brain areas, our results will provide insight not only into the visual system, but also into how any collection of brain areas cooperates to give rise to perceptions and decisions.

Decoding internal state to predict behavior

The behavioral repertoire of any animal is ultimately determined by the activity of its brain cells. Yet, how exactly neural activity leads to actions remains unknown. Which patterns of neuronal activity trigger which behavior? How does an animal string together many actions to accomplish a goal?

Understanding these processes will not be trivial, and will require deep insight into animal behavior, recordings of the activity of many neurons at once, and sophisticated mathematical models. We propose a collaboration among three laboratories to bring these techniques to bear on such fundamental questions in neuroscience. Working in mice, we will use a novel system to record the electrical activity of many neurons at once while simultaneously monitoring a freely moving mouse’s posture in 3D. We can then use mathematical models to determine how neural activity relates to the mouse’s movements. We will specifically focus on a brain area known as the striatum, and look to see whether the mouse’s voluntary choices to move are reflected in neural activity in the striatum. In this way, we can address the fundamental question of how an animal’s behavioral state is represented in neural circuits. Given the similarities in brain structure between mice and humans—though the mouse brain is of course far less complex—we expect our insights to apply to humans as well.
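The simplest version of "decoding behavioral state from neural activity" is a nearest-centroid classifier: average each state's activity patterns, then assign new patterns to the closest average. A sketch with hypothetical two-neuron firing rates (far smaller than real recordings):

```python
# Minimal decoding sketch (hypothetical firing rates): classify behavioral
# state from a two-neuron activity pattern with a nearest-centroid decoder.
rest_trials = [(1.0, 0.5), (1.2, 0.4), (0.9, 0.6)]
run_trials = [(3.0, 2.5), (3.2, 2.4), (2.8, 2.6)]

def centroid(trials):
    n = len(trials)
    return tuple(sum(t[i] for t in trials) / n for i in range(2))

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def decode(pattern, centroids):
    # Pick the behavioral label whose centroid is closest to the pattern.
    return min(centroids, key=lambda label: dist2(pattern, centroids[label]))

centroids = {"rest": centroid(rest_trials), "run": centroid(run_trials)}
print(decode((1.1, 0.5), centroids))  # rest
print(decode((3.1, 2.5), centroids))  # run
```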

Dense Mapping of Dynamics onto Structure in the C. elegans Brain

In the human brain, billions of neurons wired into networks sense the environment; process, store, and retrieve information; and produce thoughts and behavior. However, it’s not understood how the activity of neurons gives rise to these cognitive processes. A major program of neuroscience is to address this question by building so-called “connectomes,” or complete wiring diagrams of the connections among neurons.

These efforts are occurring in a variety of organisms, including humans. However, the projects are challenged by the sheer complexity of the connectomes, even in simple organisms such as fruit flies and mice. They also lack a functional framework for interpreting the anatomical data that connectomes provide. This gap is most readily apparent in the story of the simple worm known as C. elegans. Twenty-five years ago, the connectome of C. elegans was mapped in exquisite detail—and C. elegans remains the only organism with a complete wiring diagram—yet it is still largely unknown how this brain produces any type of behavior. Taking advantage of the relative simplicity of C. elegans, our lab has developed a new microscopy technology that can, for the first time, capture the real-time activity of every neuron in a whole brain. We will use this platform, coupled with a new experimental approach to arouse the worms into action with complex stimuli, as well as sophisticated computational methods, to unravel the relationship between the connectome and functional activity in the worm brain. This basic “function-from-structure” question is of fundamental significance to the multitude of brain-mapping projects underway in more complex organisms. Our work in the worm will establish a baseline for how useful a connectome is to elucidating brain function, influencing a range of connectome projects, including those involving the human brain.
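The "function-from-structure" question can be posed in miniature: given a wiring diagram, simulate activity flowing over it and check that function follows the anatomy. A toy three-neuron sketch with hypothetical linear dynamics (nothing like the worm's real circuitry):

```python
# Toy 'function-from-structure' sketch (hypothetical 3-neuron wiring):
# activity injected into neuron 0 reaches neuron 2 only via the path
# 0 -> 1 -> 2 that the wiring diagram specifies.
W = [
    [0.0, 0.0, 0.0],  # inputs to neuron 0 (none)
    [0.5, 0.0, 0.0],  # neuron 1 receives from neuron 0
    [0.0, 0.5, 0.0],  # neuron 2 receives from neuron 1
]

rates = [1.0, 0.0, 0.0]  # stimulate neuron 0
history = [rates[:]]
for _ in range(3):
    rates = [sum(W[i][j] * rates[j] for j in range(3)) for i in range(3)]
    history.append(rates[:])

print(history[1])  # [0.0, 0.5, 0.0]: one synapse later
print(history[2])  # [0.0, 0.0, 0.25]: two synapses later
```

Real brains violate this tidy picture (neuromodulators, gap junctions, nonlinear dynamics), which is exactly why an empirical baseline in C. elegans is valuable.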

Discovering repeating neural motifs representing sequenced behavior

Imagine a tennis player hitting a forehand, but in slow-motion: There is anticipatory footwork, alignment of the body, preparation of the racquet, the wind-up and swing of the arm, final adjustments as the ball approaches, contact and a follow-through. Sports coaches and players intuitively learn and divide this complex behavior into discrete steps.

Players will drill the footwork, the swing and the follow-through separately. This general principle — that an action can be divided into a series of steps — holds true for any movement we do. However, the smallest meaningful units of behavior remain a mystery. Taking inspiration from linguists, who view spoken words as syllables that can be infinitely recombined, we seek to discover the ‘syllables’ of behavior — and the patterns of brain activity that generate them. We will take advantage of new technology to automatically track the movements and body position of a mouse as it forages around its cage and performs tasks. At the same time, we will monitor the activity of a brain region called the striatum, which is involved in rewards, movements and cognition. Using a new class of mathematical models that do not make many assumptions about how behavior is organized, we will deconstruct the mouse’s normal behavior into motifs, or syllables, and correlate those with brain activity. The striatum is especially interesting to monitor because it receives inputs from the brain’s dopamine system. Recent work suggests that the dopamine system initiates the switch from one behavior to another, providing the basis for how syllables are recombined to create an infinite repertoire of behavior. Our results not only will illuminate how a laboratory mouse behaves but might one day even explain how a professional tennis player hits a forehand.
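A crude stand-in for such models conveys the idea of syllable discovery: threshold a speed trace and run-length encode it into alternating "still"/"move" motifs. This is made-up data and a far simpler rule than the actual model class, which learns the segments from the data:

```python
# Minimal sketch (not the lab's actual models): segment a 1-D speed trace
# into discrete "syllables" by thresholding and run-length encoding.
speed = [0, 0, 1, 5, 6, 6, 5, 1, 0, 0, 0, 4, 5, 4, 0]  # hypothetical trace

def segment(trace, threshold=2):
    labels = ["move" if v > threshold else "still" for v in trace]
    motifs = []
    for lab in labels:
        if motifs and motifs[-1][0] == lab:
            motifs[-1] = (lab, motifs[-1][1] + 1)  # extend the current motif
        else:
            motifs.append((lab, 1))                # start a new motif
    return motifs

print(segment(speed))
# [('still', 3), ('move', 4), ('still', 4), ('move', 3), ('still', 1)]
```

Each (label, duration) pair is a candidate "syllable" occurrence; the real analysis would then ask which neural activity patterns in the striatum line up with each one.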

Dissecting navigation and the general logic of episodic state computation

Imagine you are rowing a boat in a river. If the boat stops moving relative to the shore, you might conclude that the river’s current has picked up and decide to row harder. That decision results from a combination of sensory information and internal predictions.

Your brain compared your actual position to your expected position given how fast you were rowing and concluded that the slowing of the boat violated your sensory expectations. Every time we navigate through the world, we not only carry out the current movement but also predict how our sensory environment should change when we move. How the brain does this is largely unknown. We hypothesize that you can use your past position and current velocity to create what we call your ‘episodic state’ — the minimum knowledge of the past and present needed to predict the sensory experience of the immediate future. We will test this hypothesis by recording activity in many neurons, in both mice and flies, as the animals navigate a virtual reality environment. With virtual reality, we can create situations in which the laws of physics hold or are violated. For example, physics dictates that if you move the same amount forward as backward at the same speed, you should end up in the same place. We can break this rule and see how the animal — and its brain — responds. This setup allows us to determine how the brain computes its position and velocity, how it creates a representation of its episodic state and how it uses this information to predict future sensory feedback. This work will provide general insights into how any animal makes predictions, including humans.
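The episodic-state idea reduces, in its simplest form, to dead reckoning: past position plus current velocity predicts the next sensory sample, and a mismatch flags a physics violation. A sketch in hypothetical units:

```python
# Sketch of the 'episodic state' idea (hypothetical units): past position plus
# current velocity suffice to predict the next sensory sample, and a mismatch
# signals that the usual physics has been violated.
def predict_next(position, velocity, dt=1.0):
    return position + velocity * dt

pos, vel = 0.0, 2.0
observed_normal = pos + vel * 1.0          # the world obeys the prediction
observed_violated = pos + 0.5 * vel * 1.0  # e.g., virtual-reality gain change

err_normal = abs(observed_normal - predict_next(pos, vel))
err_violated = abs(observed_violated - predict_next(pos, vel))

print(err_normal, err_violated)  # 0.0 1.0: only the violation yields an error
```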

Dynamical computation in populations — analysis and theory

The brain has a remarkable ability to choose appropriate actions in the face of an ever-changing environment. Even when sensory input is completely new — such as being in a new environment — animals are able to navigate successfully, respond to threats and capture prey.

This ability depends on integrating sensory input with past experience to make decisions and implement behavior. Neuroscientists have long sought to understand how the brain makes complex decisions in uncertain environments. Recent technical advances have produced huge amounts of brain activity data recorded as animals perform these types of tasks, yet neuroscientists have developed relatively few new theories to explain these data. We have established a collaborative group to develop mathematical descriptions of brain activity called ‘dynamical neural models.’ Dynamical neural models help explain how the activity of hundreds or even millions of neurons represents internal variables — a sound, the position of a limb during a movement, the desire to move in the first place, or something else. We will study how groups of neurons pass messages to each other to create these representations, as well as how the brain learns to form them and use them for decisions and actions. Our work will provide much-needed theories of brain function to help interpret the large amounts of data generated by neuroscientists.

Dynamics of neural circuits, representation of information, and behavior

How the brain holds information in short-term memory is a mystery. Individual neurons can only hold information for a short period of time, much shorter than what would be required to store a memory. So how do brains maintain short-term memories?

A number of theoretical models propose that the key is in the activity not of single neurons but of populations of neurons. Working in mice, we have developed a simple task to test whether groups of neurons maintain memories. Mice are allowed to explore an object with their whiskers. Based on this sensory evidence gathered by whisking, the mice make a decision about the object’s location, which they will later indicate by licking in one direction or another. A key aspect of the experiment is that we impose a short waiting period of a few seconds between when the mouse decides where the object is and when the mouse licks to indicate its decision. In this manner, the mouse must hold the decision in its brain for a few seconds: in other words, we’ve trained the mouse to create a short-term memory. Then, using technology that allows us to monitor groups of neurons simultaneously and manipulate their activity, we can test whether these neuronal populations can indeed hold the information relevant to a memory. Many theoretical models propose ways in which the collective activity of neurons holds memories, and our experimental set-up will allow us to test the evidence gained from neurons against these theoretical models. These results will not only shed light on how memory is stored in neural circuits, but will explore the much more general question of how neural activity represents information in the brain.
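A standard toy version of the theoretical models in question is persistent activity sustained by recurrent feedback: a unit that feeds its own output back at full strength holds a transient input across the delay, while a weakly coupled unit forgets within a few time steps. A minimal sketch (hypothetical parameters, a single unit rather than a population):

```python
# Toy model (not the lab's network): a unit with strong recurrent feedback can
# hold a transient input across a delay, while a weakly coupled unit forgets.
def run(w_recurrent, pulse_time=5, steps=100):
    r, trace = 0.0, []
    for t in range(steps):
        inp = 1.0 if t == pulse_time else 0.0  # brief sensory pulse
        r = w_recurrent * r + inp              # linear recurrent dynamics
        trace.append(r)
    return trace

persistent = run(w_recurrent=1.0)  # perfect integrator: activity persists
leaky = run(w_recurrent=0.5)       # activity decays within a few steps

print(persistent[-1])  # 1.0: the memory survives the delay
```

Population versions of this idea distribute the feedback across many neurons, which is what makes the simultaneous recordings and perturbations described above the right test.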

Global brain states and local cortical processing

The cerebral cortex—the outermost, wrinkled layer of the brain—is involved in many cognitive processes, including sensory processing, decision-making, and movement. It’s generally thought that these tasks are carried out by so-called “local” circuitry, which is composed of networks of nearby neurons. However, the brain also experiences what are called “global states,” such as attention, arousal, or motivation.

These global states can affect the processing carried out by local circuits. Exactly how global states influence local circuitry, however, remains a mystery. We plan to investigate these fundamental interactions between local computations and global brain states. Global brain states are characterized by slow oscillations of neuronal activity across the entire cortex—what are commonly referred to as brainwaves. When the brain needs to perform a certain task, however, it carries it out using local circuitry. We hypothesize that when local circuitry is in use to perform a task, the cortex stops oscillating near that local region. To test this hypothesis, we plan to use imaging techniques to visualize global patterns of brain activity while a mouse transitions from one global brain state to another—in this case, the global brain state will be changed by having the mouse engage in a visually demanding task followed by passive viewing. But how do changes in global brain states affect processing in local networks of neurons? One hypothesis is that global brain states affect the responsiveness of neurons. For example, during the visually demanding task, the global brain state may increase the responsiveness of certain neurons related to visual processing. To test this hypothesis, we plan to visualize the activity of local groups of neurons and compare their responsiveness as the mouse transitions from a visually demanding task to passive viewing. Finally, to understand exactly how this brain-state modulation of local processing is carried out in individual neurons, we plan to visualize the responses of one neuron at a time. The hypothesis is that one part of the neuron—termed the apical dendrites—carries information about global brain state to the neuron’s main processing center, termed its soma. Our research will reveal how global patterns of activity affect processing of local circuitry, shedding light on a common motif in neuronal computation.
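The "responsiveness" hypothesis is often formalized as multiplicative gain modulation: the stimulus-driven part of a neuron's response is scaled by a state-dependent factor. A sketch with a hypothetical orientation-tuned neuron and made-up parameters:

```python
import math

# Toy sketch (hypothetical numbers, not the lab's model): an orientation-tuned
# neuron whose stimulus-driven response is multiplicatively scaled by a
# state-dependent gain, as in the "responsiveness" hypothesis above.
def response(orientation_deg, preferred_deg=90.0, gain=1.0, baseline=2.0):
    # Gaussian tuning curve around the preferred orientation (width 20 deg).
    tuning = math.exp(-((orientation_deg - preferred_deg) ** 2) / (2 * 20.0 ** 2))
    return baseline + gain * 10.0 * tuning

passive = response(90.0, gain=1.0)   # passive viewing
engaged = response(90.0, gain=1.8)   # engaged, visually demanding task

print(passive, engaged)  # 12.0 20.0: same stimulus, larger response when engaged
```

Comparing measured responses during the task and during passive viewing, as the experiments above propose, is exactly the comparison between these two calls.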

Hidden states and functional subdivisions of fronto-parietal network

The essence of cognition is choice, and to understand choice we need to understand how neurons in the brain generate decisions in complex settings. To make a decision, the brain must combine different sources of information by weighing them based on their relevance.

Past studies have typically gained insight into neural mechanisms of decision-making by recording from one or a small number of neurons at a time and by analyzing the average neural responses across several decisions. However, this approach has its limitations and understanding how decisions are made at the neural level has so far proved challenging. We plan to record from hundreds of neurons simultaneously in multiple brain areas while monkeys make a choice. We expect to be able to predict the monkey’s choices ahead of time based on our large-scale neural recordings. In addition, we will use the data to understand how different functional modules in the brain communicate and coordinate their response to form a decision and execute an action. Our studies will enable better understanding of cognition, and guide development of artificial systems that can mimic human decision-making. Moreover, they will facilitate better treatments for deficits in the decision-making process in cognitive and mental disorders.

Higher-Level Olfactory Processing

How do complex patterns of activity in large populations of neurons gain meaning and guide behavior? This question can be addressed in the olfactory systems of fruit flies and mice because, in both of these organisms, odors are represented randomly in key olfactory circuits: the mushroom body in flies and the piriform cortex in mice.

These random representations have no inherent meaning, and thus meaning must be imposed through learning. Study of these systems will involve extensive collaboration between experimental and theoretical researchers. The experimental work will make use of genetic techniques for visualizing and manipulating neural activity. The theoretical work will involve building models to test hypotheses arising from the data and providing predictions that lead to further experimentation. The goal of these studies is to illuminate how initially arbitrary patterns of neural activity gain meaning through experience-dependent learning.
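How an initially arbitrary code can acquire meaning is easy to sketch with a toy learning rule (hypothetical parameters, a generic delta rule rather than the biological mechanism): two odors get random binary codes, and pairing one with reward and the other with punishment trains a readout to respond oppositely to them.

```python
import random

random.seed(3)

# Toy sketch (hypothetical parameters): two odors get random, initially
# meaningless binary codes across 20 Kenyon-cell-like units; a simple
# reward-driven delta rule then imposes meaning on those codes.
n_units = 20
odor_a = [random.choice([0, 1]) for _ in range(n_units)]
odor_b = [random.choice([0, 1]) for _ in range(n_units)]
weights = [0.0] * n_units

def output(code):
    # Readout neuron: weighted sum of the odor code.
    return sum(w * x for w, x in zip(weights, code))

# Pair odor A with reward (target +1) and odor B with punishment (target -1).
for _ in range(20):
    for code, target in [(odor_a, 1.0), (odor_b, -1.0)]:
        err = target - output(code)
        weights = [w + 0.05 * err * x for w, x in zip(weights, code)]

# After learning, the same arbitrary codes drive opposite responses.
print(output(odor_a) > 0.5, output(odor_b) < -0.5)
```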

How the basal ganglia forces cortex to do what it wants

It is well established that multiple brain areas are involved in generating movements, yet individual brain areas are often studied in isolation. A wealth of data indicates that the basal ganglia interact with the cerebral cortex and are important for movements. For example, diseases of the basal ganglia, such as Parkinson’s, cause profound movement disorders.

Understanding the role of the basal ganglia in movements will therefore be critical to understanding movements in general, and to finding cures for movement disorders. Neurons in the basal ganglia and those in the cerebral cortex are intimately connected to each other, indicating that these two regions likely act together during movements. But what is the relationship between neuronal activity in the cortex and neuronal activity in the basal ganglia? Our goal is to understand this relationship in both monkeys and rodents. One idea is that the basal ganglia bias, or force, the cortex to choose one action over another—especially an action that could lead to a reward. The basal ganglia’s known role in predicting rewards is consistent with such a relationship. We will test this possibility by using technology to record the activity of many neurons at once, and also to manipulate the activity of those neurons while the animals perform various movement-related tasks, such as reaching. In general, our work will shed light on how two major brain areas contribute to movements based on reward, paving the way for insights into movement disorders, such as Parkinson’s.

FULL PROJECT

More
Integrative approaches to understanding whole-brain computation

The human brain is composed of billions of neurons. Currently, neuroscientists understand just a fraction of the electrical and chemical code that neurons use to communicate with one another. One obstacle has been the lack of technologies to simultaneously track the activity of each individual neuron at ‘the speed of thought.’

New techniques, however, are on the verge of accomplishing this in smaller systems. Our group will combine the power of sophisticated computer algorithms with cutting-edge microscopes to monitor the activity of many neurons at once in real time. Our previous work used powerful microscopes to observe all the neurons in the brain of a zebrafish in its larval stage. However, these observations were slower than the speed of electrical activity in the fish’s brain. Collaborating with other labs, we will develop computational methods to determine the activity of single neurons even when the microscope on its own isn’t powerful enough to view objects as small as single cells. This advance will enhance imaging speed up to tenfold or more, enabling us to conduct previously impossible experiments. Working in zebrafish, for example, we will identify how small areas of the brain involved in initiating movements influence how the entire brain processes feedback from those movements. Our work will provide a suite of new tools for other neuroscientists to use, as well as general insight into how the brain processes information and carries out behavior.

FULL PROJECT

More
Interaction of sensory signals and internal dynamics during decision-making

Imagine deciding whether or not to eat a piece of cake. When making that decision, there are two main factors. First, is the cake appetizing? That is, does the sensory experience of the cake compel you to eat it? Second, are you hungry?

That is, is the internal state of your brain—hungry or not—conducive to eating the cake? Every decision depends upon these two factors—incoming sensory stimuli and internal brain state—yet how they interact in neural circuits remains a mystery. We are working in mice to study how internal states are combined with sensory experience to guide decisions. We have trained mice to make choices about various types of sensory stimuli. To investigate how internal states influence these perceptual decisions, we plan to take advantage of the fact that, even without training, each mouse has an inherent bias to favor one option over another. This bias will be reflected in the internal state of the brain. So, because of the pre-existing bias, even when the animal is making a decision about the same sensory stimulus, the internal brain state will be different depending on which choice the animal makes. Then, we will record the activity of large populations of neurons. With this technique, we can compare neuronal responses to the same sensory stimulus during different brain states—that is, when the choice is the same as or different from the mouse’s initial bias. By computationally modeling the network of neurons, we will be able to determine how bias—i.e., the internal brain state—affects the activity of the neural network involved in decision-making. Finally, once we have established how brain state changes the activity of the population of neurons, we can use sophisticated genetic techniques to examine which neurons in the population contribute to the changes in brain state. By incorporating information about internal brain states, these experiments will allow a much deeper understanding of decision-making.

FULL PROJECT

More
International Brain Lab

A central goal of neuroscience is to decode the brain activity responsible for decisions and actions. Imagine an animal foraging for food. First, the animal needs to evaluate sensory signals in its environment to judge which foods are currently available.

Then, the animal must decide which of the available food choices will be the most rewarding and make a plan for action. The neural mechanisms supporting that seemingly simple process are frighteningly complex. Until very recently, the field has lacked the tools to read out neural activity at the scale necessary to understand even the simplest choices. A recent explosion in new techniques and accompanying mathematical advances has opened a window into how the brain makes choices. But a serious challenge remains. Harnessing these new tools effectively is beyond the reach of any single laboratory. While individual labs have made significant advances, the piecemeal approach has so far made it difficult for scientists to compare and reproduce each other’s data. The International Brain Lab, a joint effort funded by the Simons Foundation and the Wellcome Trust, will combine the efforts of 20 laboratories worldwide to focus on a single goal: to determine how the brain functions during a simple decision in a mouse. The mouse will be trained to make decisions about visual stimuli while we measure neural activity brain-wide. We will make precise electrical recordings of hundreds of neurons from many brain areas and use sophisticated microscopes to directly observe the brain in action. Leading computational neuroscientists will develop mathematical and computer models of this brain activity. We hope not only to discover how brains support decision-making in any animal, humans included, but also to offer a new large-scale, collaborative model for brain science.

FULL PROJECT

More
Large-Scale Cortical and Thalamic Networks That Guide Oculomotor Decisions

The cerebral cortex—the outermost, wrinkled layer of our brains—contains a variety of brain areas that are connected to form neural networks. Some areas of the cerebral cortex, including two areas known as PFC and PPC, are specialized to be involved in movements to a goal or target.

These movements might be rapid eye movements, known as saccades, or arm reaches. When faced with choices about which movement to make, PFC and PPC contain patterns of neural activity that predict the impending decision. Other brain areas outside of the cortex, such as the thalamus, are also involved in that choice. Our goal is to dissect the activity of the neural networks formed by PFC, PPC, and the thalamus by building computational models and testing them with experiments. We specifically plan to study the connectivity of the networks in these three brain areas. We have developed wireless technology to record the activity of neurons in many brain areas at once. We will take advantage of this new technology to record neurons while monkeys are trained to select a target with a saccade. This set-up will allow us to examine network processing while the monkey is making decisions. We can then use sophisticated genetic techniques to assess how directly the PFC, PPC, and thalamic networks influence one another, and how information flows through those neural networks. Working in collaboration with theoretical neuroscientists, we will build computational models of these networks to study the interactions among all three brain areas and test the model’s predictions with new experiments. This arrangement allows for a close marriage of theory and experiment, and should provide general insights into how multiple brain areas work together when making decisions.

FULL PROJECT

More
Large-Scale Data and Computational Framework for Circuit Investigation

Decades of research have uncovered fundamental principles about how the eye works. The cornea and the lens act in concert to form an image of the world on the retina, a paper-thin sheet of neural tissue that lines the back of the eye.

The retina is a laminar structure with about 60 distinct types of neurons. The input layer contains specialized photoreceptor cells that sense light. The signals from the photoreceptors are processed by neurons in the other layers, and the processed output is transmitted along the optic nerve to the brain. Because of this processing, the role of the retina in vision is much more complex than that of reporting camera-like images to the brain. However, we lack a unified model that integrates the considerable knowledge about the retinal components and offers a means of calculating the flow of signals from the incident image, through the retina, and onto the optic nerve. We are spearheading an effort to develop such a unified computational model, which will be based on experiments from laboratories around the world, including our own. This model will incorporate existing information about the physiological optics of the eye, as well as key features of retinal neural circuits. By synthesizing and refining our understanding of the eye, this model will provide a foundation for further investigating the human visual system, evaluating approaches to restore eyesight, and other practical applications in science, engineering, and medicine.

FULL PROJECT

More
Leveraging dynamical smoothness to predict motor cortex population activity

Animals often move in response to the senses. A dog follows a scent trail or a field mouse seeks cover after a hawk’s shadow passes overhead. Neuroscientists studying the sensory system have made great strides decoding how the brain translates sights and smells into neural activity.

They give animals a specific stimulus — a pattern of stripes, say, or a specific chemical odor — and simultaneously measure the brain’s ‘code’ for that stimulus: i.e., how neural activity depends upon the stimulus. However, this approach has largely failed for the brain’s output side, the motor system. Despite decades of work, neuroscientists still struggle to describe how neural activity in the brain relates to the movements being generated. The number of neurons in the brain’s motor areas vastly exceeds the number of muscles, meaning that many different patterns of neural activity could be responsible for the same movement. This makes it potentially impossible to find a unique, simple code describing neural activity in terms of movement. Our goal is to turn this conundrum into an opportunity. Of all the patterns of neural activity that could produce a given movement, the brain must pick one, and this ‘choice’ may be illuminating regarding the basic principles at play. Using a combination of experiments and computer modeling, we aim to describe the choices made by the brain’s movement-generating system, and to decipher the principles behind those choices. We will train monkeys to use their arms to control a pedal-like device and navigate a virtual world for a juice reward. At the same time, we will record the activity of neurons in the motor cortex and muscles in the arms. Pedaling movements have predictable properties — they are smooth and occur in cycles of varying speeds. Yet pedaling is driven by complex patterns of muscle activity that the brain must precisely create and control to ensure smooth movement. By comparing muscle activity to neural activity across many such movements, we can employ mathematics and computer optimizations to ask why the brain ‘chooses’ the observed patterns of neural activity.
Neural network models can then be used to ask whether and why those choices are ‘good’ choices — i.e., do they allow the motor system to function better under challenging circumstances? Results will be compared between monkeys and mice. Indeed, we anticipate our approach will offer general insights into motor systems in any animal, including humans.
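The redundancy at the heart of this project can be sketched in a few lines. Assuming, purely for illustration, a linear readout in which many motor-cortex neurons drive a handful of muscles (the matrix `W` and activity patterns `r1`, `r2` below are hypothetical, not the project's actual model), any activity component in the null space of the readout changes the neural pattern without changing the movement:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear readout: many motor-cortex neurons drive few muscles.
n_neurons, n_muscles = 50, 5
W = rng.standard_normal((n_muscles, n_neurons))   # illustrative readout map

# One pattern of neural activity producing some muscle command...
r1 = rng.standard_normal(n_neurons)

# ...plus any component from the null space of W (directions the muscles
# cannot "see") yields exactly the same muscle output.
null_basis = np.linalg.svd(W)[2][n_muscles:]      # rows spanning the null space
r2 = r1 + null_basis.T @ rng.standard_normal(n_neurons - n_muscles)

assert np.allclose(W @ r1, W @ r2)   # same movement, different neural pattern
```

Which of these equivalent patterns the brain actually uses is exactly the 'choice' the project proposes to study.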

FULL PROJECT

More
Mechanisms of Context-Dependent Neural Integration and Short-Term Memory

The way in which neurons process information determines how the brain functions. For example, in order to make a decision, neurons accumulate evidence towards different, competing alternatives, eventually allowing an animal to make one choice over another.

These accumulating neurons are termed “neural integrators.” Neural integrators can be thought of as short-term memory systems that store a running total of the inputs they have received. While easiest to understand in terms of decision-making, neural integrators show up in a variety of other brain processes. For instance, in motor control and navigation, when signals that contain information about the velocity of movements are integrated, they become signals that contain information about body position. In either of these cases—decision-making or motor control—the accumulation and storage of information can be context-specific. That means that different information is accumulated depending on the conditions under which a task is performed; in other words, the task’s context. We will investigate such context-specific accumulation and storage of information in a well-studied system in the brain: the oculomotor system that controls eye movements. This system contains a neural integrator circuit that converts eye velocity signals into eye position signals. This conversion is context-specific: the circuit maintains distinct patterns of neural activity depending on whether a given eye position has been reached through a sudden, rapid eye movement or a slower, tracking eye movement. We will develop a computational model of the oculomotor neural integrator system in larval zebrafish, and collaborate with experimentalists to incorporate data from the activity of every neuron in this system. We will test our model predictions using sophisticated genetic techniques to manipulate the activity of neurons to determine the core features of the process of neural integration in different contexts. This work promises to reveal ubiquitous mechanisms by which neurons accumulate information and store it in short-term memory, applicable not only to motor systems, but also to higher-level cognitive processes such as decision-making.
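The core idea of a neural integrator — a running total that doubles as short-term memory — can be sketched with a simple leaky-integrator equation. This is a minimal, textbook-style illustration, not the project's model; the time constant, pulse shape, and units are all made up for the example:

```python
import numpy as np

# Hypothetical eye-velocity command: a brief rightward pulse, then silence.
dt = 0.01                                  # seconds per simulation step
t = np.arange(0, 2, dt)
velocity = np.where(t < 0.1, 10.0, 0.0)    # 10 deg/s for 100 ms

def integrate(velocity, tau, dt):
    """Leaky integrator x' = -x/tau + input; tau -> infinity gives perfect memory."""
    x, out = 0.0, []
    for v in velocity:
        x += dt * (-x / tau + v)
        out.append(x)
    return np.array(out)

perfect = integrate(velocity, tau=np.inf, dt=dt)  # holds the new eye position
leaky = integrate(velocity, tau=0.5, dt=dt)       # position drifts back to zero
```

A perfect integrator ends holding the summed velocity input (here about 1 degree of eye displacement), while a leaky one forgets it — which is why the circuit's ability to hold eye position steady is a stringent test of integration.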

FULL PROJECT

More
Modulating the dynamics of human-level neuronal object representations

How does the brain give rise to the mind? In other words, how does neural activity give rise to thoughts and perceptions? This is one of the deepest questions in modern neuroscience, and we propose to approach it by studying how monkeys perceive visual objects.

Such “visual object recognition” is a critical step to understanding the environment, and it underlies such diverse cognitive processes as judgment, planning, and decision-making. Yet, how the brain’s neural circuits work together to decipher complex visual scenes remains a mystery. To tackle this challenging problem, we propose to record the electrical activity of hundreds of neurons across multiple brain areas that underlie visual object recognition in monkeys. We will also develop a computational model that links neural activity in these brain regions to the visual object perceived by the monkey. Our model will be able to predict the perception of the monkey from the measured neural activity. Because monkeys have very similar visual systems to humans, we expect that this work will directly inform how the human brain is solving these same problems.

FULL PROJECT

More
Multi-regional neuronal dynamics of memory-guided flexible behavior

Our brains are able to store information about the world for several seconds after it was acquired, for example the shape and identity of an animal that disappeared behind a tree. Such ‘short-term memory’ is crucial for reasoning and decision-making, providing the link between past events and future behavior.

Despite its fundamental importance, how the brain represents and uses short-term memory is not fully understood. One of the challenges is that neural activity representing short-term memory is dispersed across different parts of the brain. Recent technological advances now allow us to measure activity from different regions simultaneously. We propose to study two different behaviors in mice that involve short-term memory. In the first task, a mouse will hear a sound and, after a delay, make a movement to report the sound it heard. The mouse has to maintain information related to the sound during the delay before indicating its decision. In the second task, we will present the mouse with a sound and then deliver another cue that will tell the mouse which actions to choose, contingent on the sound. Here the mouse has to retrieve past sensory information to make a current decision. We will use Neuropixels recording probes and a mesoscale microscope to record activity from hundreds of neurons in the brain. We will sample activity across approximately 50 brain areas, selected based on connectivity data. This data will amount to a map of neural activity underlying short-term memory, decision-making and movement initiation. The activity map will guide the development of computer models of the brain, allowing us to understand general rules of memory-guided decisions.

FULL PROJECT

More
Network mechanisms for correcting error in high-order cortical regions

Unaided by modern technology, our very survival would depend upon our brain’s ability to accurately map external space. Returning home or to a safe haven, for example, requires that we identify our location in space, remember the way back, and navigate there.

For over a century, scientists have asked how the brain represents and remembers our external environment. Only in the last few decades, however, have researchers discovered the basic building blocks of an internal, neural navigation system. This system depends upon neurons in a brain region called the medial entorhinal cortex, which translate the external environment into an internal map of space. In the medial entorhinal cortex, neurons called “grid cells” provide the basis of this internal map of space. A given grid cell increases its activity when an animal or human moves through particular locations in space. For example, if a mouse runs in a straight line, a grid cell will increase its activity every 30 centimeters, relaying to the mouse how far it has run through space. Grid cells, therefore, generate a representation of space similar to a longitude and latitude coordinate system. While originally discovered in rodents, grid cells have now been found in a range of species, from bats to humans, suggesting they are fundamental to how the brain remembers location and navigates the animal through space. However, despite the ubiquity of grid cells, how the brain actually creates neurons that respond this way is largely unknown. Our team will address this gap in knowledge by aiming to identify how sensory information can help the spatial maps created by grid cells remain stable over time. Taking an interdisciplinary approach, we will record the electrical activity of neurons in the medial entorhinal cortex of mice while they navigate through space. In addition, we will develop a computational model to investigate the exact way in which these neurons act to generate their external representations of space. Our work will more broadly address how neural activity in the medial entorhinal cortex gives rise to spatial memory and navigation not just in mice, but in other animals, including humans, as well.
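The periodic firing described above can be sketched in one dimension. The 30 cm spacing follows the example in the text, but the cosine-bump tuning curve below is an illustrative simplification, not a fitted model of real grid fields:

```python
import numpy as np

spacing = 30.0                         # cm between firing-rate peaks (from the text)

def grid_rate(position_cm):
    """Firing rate (0 to 1) of a sketched 1-D grid cell at a given position."""
    return np.maximum(0.0, np.cos(2 * np.pi * position_cm / spacing)) ** 2

# As the mouse runs a straight 1.5 m track, activity peaks recur every 30 cm...
field_centers = np.arange(0.0, 151.0, spacing)   # 0, 30, 60, 90, 120, 150 cm
assert np.allclose(grid_rate(field_centers), 1.0)

# ...and the cell is silent midway between fields.
assert grid_rate(15.0) == 0.0
```

Counting such regularly spaced peaks is what lets downstream circuits read out distance travelled, much like tick marks on a coordinate axis.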

FULL PROJECT

More
Network properties and plasticity in high- and low-gain cortical states

Neural networks in the brain have their own intrinsic activity independent of incoming sensory stimuli. This intrinsic activity is known as the global brain state, and the brain’s state will affect the processing of incoming sensory stimuli.

For example, what we notice in the world is a function of what we pay attention to. Another way to say this is that the global brain state of attention determines what sensory stimuli we process. But exactly how, at the level of neural circuits, do global brain states influence sensory processing? Working in the visual system of mice, our lab has made significant progress towards answering this question. We have found that locomotion puts the visual cortex of mice into a so-called “high gain” state, which renders it more sensitive to visual stimuli. In this state, visual experience can cause long-lasting changes in the adult brain. We will allow mice to walk through a simple virtual reality environment in which they learn a foraging task, associating certain visual objects with a reward. When the mouse virtually passes one of these objects, it is trained to pause and lick. We will study the mouse under two conditions: when the mouse is moving through the virtual environment on its own, or when the mouse is still and the virtual objects move past the mouse. In both cases, the mouse has to lick the passing objects for the reward. But in the two cases, even though the sensory stimuli remain the same, the brain state is different: the mouse is either moving or not. We will then implant electrodes that allow us to monitor the activity of multiple neurons at the same time. We hypothesize that the mouse will be better at the foraging task when moving because locomotion puts the brain in a high gain state. We will then use sophisticated genetic techniques to manipulate neural activity and mathematical analyses to determine which features of activity in the high gain state are responsible for the increased learning. Evidence from humans shows that global brain state can affect perceptual learning.
Our results, therefore, will shed light not only on how movement-induced brain states influence visual perception, but also on how global brain states affect cognitive processing more generally.

FULL PROJECT

More
Neural Circuit Dynamics During Cognition

How does the activity of neurons give rise to cognitive phenomena such as short-term memory or decision-making? To address this question, we will take advantage of an experimental setup we have perfected over the course of a decade.

In this setup, a rodent is allowed to navigate through a virtual reality world while we record the activity of many neurons in its brain at the same time. We can also use sophisticated genetic techniques to perturb the activity of neurons in the brain, allowing us to test models in a causal manner. The animals will perform such tasks as making decisions based on memories. We will analyze the data in collaboration with theoretical labs, which will bring statistical and mathematical expertise to the table. Because tasks such as making decisions based on memory cannot be explained by simple responses to sensory stimuli alone, we will be studying how the intrinsic activity of the brain—i.e., the brain state—affects choices. Because of the similarity of mammalian brains, we expect these insights to apply to humans as well.

FULL PROJECT

More
Neural Circuit Dynamics Underlying Sequence and Variability

Some of our most complex behavior, such as speaking, swimming or playing the piano, can be understood by breaking the behavior down into a succession of learning steps. This type of sequential learning is common across the animal kingdom — just as humans learn to speak, for example, songbirds learn to sing.

Decades of research have shown that many of the same neural processes can explain both behaviors. Birdsong, like human speech and other behaviors, is not innate but learned by imitating the behavior of parents and other adults. Scientists have identified neural circuits in zebra finches that drive song learning and have shown that the process requires the precise coordination of multiple regions spread across the brain. However, many questions remain about how these neural circuits develop throughout the bird’s lifetime. Previously, our lab demonstrated that a brain region called the HVC is involved in the timing of birdsong. Neurons in this brain region are individually active only at specific points in the song, but together form continuous sequences of activity that drive song production. We plan to observe the activity of neurons in the HVC and in auditory regions while juvenile birds are taught to sing by older adults, attempt the songs themselves and sleep. We’ll use this data to test the theory that sleep provides an essential stage of song learning — we hypothesize that the auditory regions of the brain replay the song while the birds sleep, activating the HVC to facilitate learning. Birdsong learning shares another remarkable feature with human speech learning. Juvenile finches babble just as human infants do — in birds, this is termed ‘subsong.’ We plan to investigate the brain areas, especially one called the LMAN, that drive subsong and provide the necessary variability for young birds to mix and match different sounds while learning to sing. We will collaborate with theoretical neuroscientists to develop and test models of how the brain forms sequences in HVC and generates the neural variability in LMAN. These advances will shed light not only on how birds learn to sing, but also possibly on how infants learn to speak, or how humans learn to perform any complex behavior based on sensory feedback, from sports to music.

FULL PROJECT

More
Neural coding and dynamics for short-term memory

The brain processes sensory stimuli and uses this information in conjunction with goals and memories to generate actions. All these steps involve the representation of information and brain states in changing patterns of neural activity, or "neural codes".

We seek to understand which codes the brain uses to represent information; to characterize the advantages of those codes with respect to efficiency, ease of readout, and so on, compared to other potential coding strategies; to learn about the circuit architecture between neurons that allows the brain to possess such codes; and finally, to understand how the brain uses these codes to compute and generate actions well-suited for survival in the world. Our group develops numerical and analytical tools to help "crack the neural code" and model how it might be instantiated in real neural circuits. The project proposed here ranges from topics as abstract as the mathematical analysis of neural codes to data-driven projects involving the analysis of neural data which seek to test models of how neural circuits work. On the abstract end, we are focused on questions of coding efficiency, such as how the maximum amount of information that can be stored over time in various neural codes depends on inherent variability in neural activity and the size of the circuit. On the data-driven end, we plan to collaborate with laboratories studying a set of brain areas involved in spatial learning and memory in rodents, the hippocampus and the entorhinal cortex. Our work on both coding and circuit mechanisms for this project will be focused on "grid cells", which represent animal location in space in an unusual way, and are hypothesized to be involved in navigational computations that allow mammals to estimate where they are even as they move about. We will make realistic models of the neural circuits that give rise to grid cells. We will model how grid cells interact with the hippocampus to build maps of the world and enable navigation within it.
Because the challenges of spatial inference are common to rodents and humans, and because the neural representations are similar, we expect our insights to shed light on the representations and mechanisms that underlie spatial memory and navigation in humans.

FULL PROJECT

More
Neural computation of innate defensive behavioral decisions

In general, our actions are not mere reflexes in response to external stimuli. Most of the time, external stimuli interact with internal factors—such as our emotions, or memories—to influence how we act. These actions, in turn, affect both what new stimuli we receive and also change our internal brain state. Together, these factors make for the complexities of behavior. This interplay between external stimuli and internal states is the focus of research in our laboratory.

We will study this interplay in mice, using so-called “approach and avoidance” behaviors. An approach behavior might include interactions with other mice, such as mating or fighting. An avoidance behavior might include freezing or fleeing in the face of a predator. Both behaviors depend on external stimuli—attack behavior, for example, may be triggered by seeing another mouse—but also internal factors such as the animal’s state of arousal. We will focus on attack behaviors. To study those behaviors, we will monitor the activity of many neurons simultaneously in brain areas responsible for vision and in those responsible for the mouse’s emotional state. With modern tools, these measurements can be taken even while the mouse performs the attack behavior. From our observations, we will develop computational models of the mouse’s behavior. To test these models, we can use sophisticated genetic techniques to manipulate or eliminate the activity of neurons and observe the effect on the mouse’s behavior. Because similar behaviors in humans engage emotions—via a part of the brain called the limbic system—an understanding of the neural circuits under study here will be highly relevant to psychiatric disorders such as PTSD, phobias, autism, or schizophrenia.

FULL PROJECT

More
Neural computations for visual form processing and form-based cognition

When deciding whether to eat an apple, you might focus on its color, firmness and smell. If you’re about to throw it to a friend, you might note instead, perhaps unconsciously, its size and shape. This simple example illustrates a fundamental principle: we can use the same visual input to guide a variety of decisions and actions.

How the brain achieves this flexibility is unknown. We will try to answer this question by studying the part of the cortical visual system responsible for pattern recognition — the ‘ventral pathway’ — which serially elaborates the visual information needed to identify objects, and formats that information so that other parts of the brain can use it to guide decisions and actions. Each member of our team has separately helped to discover how different parts of the ventral pathway contribute to vision. Now we will team up to ask in broader terms how it works from end to end, and how it contributes to the performance of complex visual tasks. Working in monkeys — whose visual systems are very like our own — we will record the activity of neurons from many areas in the ventral pathway. In one family of experiments, we will use a consistent and comprehensive approach to study how the visual areas of the ventral pathway create visual representations that allow us to recognize and categorize the contents of our visual world. In a second family of experiments, we will ask how the information in these representations is used to support flexible behavior. For example, at one moment monkeys might be instructed to choose between different objects, say an apple and a banana. But in the next moment, the monkey might choose between different kinds of objects, say fruits (apple or banana) and vegetables. Thus the same visual stimuli can guide two different decisions depending on the situation, and our recordings will reveal the neural computations that underlie this capacity. While our ambitious experimental, model-building and theoretical endeavor is beyond the capacity of any one of us, we will be able to achieve it through the combined, collaborative effort of our team.

FULL PROJECT

More
Neural Dynamics for a Cognitive Map in the Macaque Brain

The ability of the human brain to form a complex memory of an event experienced just a single time is astounding—and that memory can be stored and retrieved for decades. We sometimes casually refer to memory retrieval as “visualizing” the past—often with precise spatial and temporal detail intact.

But how is such a mental map created in the brain? Research suggests that some of the brain regions responsible for forming and storing memory of past events, such as the hippocampus and the entorhinal cortex, are the same ones responsible for spatial navigation, implying a deep connection between the two processes. However, how these processes are related, especially in primates, is largely unknown. Building on work in rodents showing that brain regions involved in memory are activated when the animals experience a virtual reality setting, in our experiments we will extend these results to primates. We will train monkeys to use a joystick to navigate virtual environments. Simultaneously, we will use several electronic brain implants that can measure the electrical activity of over a hundred neurons in the monkey’s hippocampus and entorhinal cortex. By analyzing this large set of neural data with sophisticated mathematical techniques, we can determine how these neurons represent the virtual environment. For example, it is known, in rodents, that certain neurons increase their activity when the animal traverses a particular, absolute point in virtual space. That is, the neuron will become excited when the animal visits the corner of the room, but not the middle. Other neurons will be active when the animal visits the middle of the room, but not the corner. In primates, how these neurons operate is less clear, but it is straightforward to study in a virtual reality environment. By working out the details of spatial navigation in primate memory brain regions, our work will lay the foundation for understanding how these mechanisms underlie the formation of complex memories, not only in monkeys, but in humans as well.

FULL PROJECT

More
Neural Dynamics of a Multi-timescale Social Behavior

The brain of any animal has to operate on multiple timescales at once. In the long term, it has to evaluate the trade-offs between feeding, mating or conserving energy. In the medium term, the animal uses those calculations to create a plan — a hungry worm, for example, might choose to feed where it is rather than to go off in search of better food or mates.

In the near term, animals have to make immediate decisions about where to go, such as turning left or right or continuing on the current path. Studying how animals plan over these different timescales has been challenging, in large part because it requires many brain areas operating at once, and neuroscientists have lacked the tools to observe neural activity over broad swaths of the brain. To overcome this technical limitation, we will study the mating behavior of a simple animal, the tiny roundworm C. elegans. Roundworms find a mate by launching a systematic search of the environment, integrating scent and tactile information. Once the worm has found a potential mate, it begins a rudimentary sequence of actions that include turning, forward and backward movement, making contact, and finally copulation. The animal’s behavior is modified by long-term factors, such as natural cycles of activity and rest, whether it’s hungry or full, and the quality of food in the environment. We have developed a custom microscope to track the activity of almost every neuron in the roundworm’s head — here, we will perform the first whole-brain recordings in any animal during social interactions. Using these data, we will map the neural activity underlying long-term, internal states, such as hunger, and examine how that influences neural activity linked to medium-term behaviors, such as mating. We will combine neural activity and behavioral data with knowledge of the worm’s neural wiring to construct computer and mathematical models, from which we will learn how behavior is generated by neuronal activity, and how neuronal activity arises from neuronal network architectures. Because the wiring differs between the two sexes, we can also observe how sex-specific versus non-sex-specific behaviors arise in two variants of a nervous system.
Our work in the lowly roundworm will shed light on how the nervous system processes information over multiple timescales with relevance for larger animals, including humans.

FULL PROJECT

More
Neural encoding and decoding of policy uncertainty in the frontal cortex

Imagine an autumn day. When you leave home, you need to decide which type of jacket to wear. Looking at the sky or checking the weather forecast can help you take a guess at the weather, but this prediction is subject to uncertainty.

The uncertainty in your prediction translates into uncertainty in your decision of which jacket to wear. How does the brain deal with such uncertainty when making decisions? We plan to investigate this question on two fronts. First, we will ask how brain areas responsible for selecting actions encode uncertainty about the outcome of a decision. Second, we will ask how uncertainty is decoded to form estimates of the outcomes of possible decisions. To investigate the first question, we will record, in mice, the electrical activity of neurons in a brain area called the premotor cortex, which has been implicated in the selection of actions. Because decisions under uncertainty involve probabilities, we will focus on evidence for how groups of neurons encode outcomes probabilistically. To investigate the second question—how uncertainty is decoded in the brain—we will record from a brain area known as the orbitofrontal cortex, which is densely connected to the premotor cortex. We believe that the uncertainty-induced variability in neuronal responses will decrease as the mouse learns to improve its estimates. To help analyze the data we collect, we will team up with a theoretical neuroscience lab that specializes in probabilistic approaches to neural coding. Our investigations will lay the groundwork for future studies that incorporate uncertainty into mathematical analyses and models. Because uncertainty is a basic fact of life, our results will pave the way to understanding higher cognitive processes that are based on unreliable data, such as making a decision based on a weather forecast.
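As a rough illustration of why response variability should shrink with learning, consider an idealized Bayesian learner. This is a textbook Gaussian conjugate update, not the labs' actual model; the prior, noise level, and observations are invented.

```python
# Idealized Bayesian learner (standard Gaussian conjugate update); the prior,
# noise level, and observations are invented for illustration only.
prior_mean, prior_var = 0.0, 1.0   # initial belief about an outcome value
noise_var = 0.5                     # assumed sensory noise per observation

mean, var = prior_mean, prior_var
for obs in [0.8, 1.1, 0.9, 1.0]:    # hypothetical outcomes over trials
    precision = 1 / var + 1 / noise_var   # precisions (1/variance) add
    mean = (mean / var + obs / noise_var) / precision
    var = 1 / precision
    print(f"estimate={mean:.2f}  uncertainty (sd)={var ** 0.5:.2f}")
# The printed uncertainty falls on every trial, mirroring the prediction that
# uncertainty-driven response variability shrinks as the animal learns.
```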

FULL PROJECT

More
Neural mechanisms of context dependent cognitive behavior

Context matters when making a decision. For example, we answer a ringing phone if we are in our own office but not in a colleague’s. This collaborative project studies how the brain learns about different contexts — ‘my office’ and ‘not my office’, for example — and how it uses context representations to guide decisions.

We will address these questions by training macaque monkeys to integrate sensory and contextual factors in well-controlled behavioral tasks. The monkeys will learn to follow specific rules based on the experimental context. We will record from different brain areas, including the prefrontal cortex, posterior parietal cortex, amygdala and hippocampus, as the animals perform the task. We hypothesize that the prefrontal cortex and amygdala will contain neural representations of the context in each task, while the hippocampus will play an important role in learning these contexts. We will test our hypothesis by applying different mathematical techniques, including dimensionality reduction, state space analyses, and neural network modeling, to our behavioral and neural datasets. These techniques enable us to extract context-related neural response patterns from the activity of hundreds of neurons within and across the recorded regions. Our results will offer insights into how context-based decisions are made in the primate brain.

FULL PROJECT

More
Neural substrates of behavior and action selection in model organisms

Our group is helping Rafael Yuste’s lab to pioneer the use of Hydra vulgaris, a small, freshwater polyp that attaches itself to underwater surfaces in lakes, rivers and ponds and uses its tentacles to capture prey, as a model organism for studying neural function. Hydra has a simple nervous system, a network of about 600 to 2,000 neurons, depending on the animal’s size.

(The human brain, in contrast, houses nearly 100 billion neurons). Despite its simple anatomy and tiny nervous system, Hydra boasts a dozen different behaviors: It can expand and contract, move its tentacles, and even somersault across surfaces. Until recently, scientists knew little about Hydra’s nervous system and had few technical tools with which to study it. We will develop techniques to track the activity of every neuron and muscle in Hydra, even when it is active (a challenge in Hydra, which can dramatically deform its soft body). We can disrupt or manipulate the activity of neurons and observe how that affects Hydra’s behavior. A detailed map, or connectome, of Hydra’s nervous system is in development, and we will take advantage of these advances to build near-complete computer models of its nervous system’s structure and function. How do distributed networks of neurons control behavior in a concerted manner? How do neurons sync up to control these behaviors? What types of connections are necessary for this synchronization? What does the ongoing activity in the nervous system mean even when the animal isn’t moving? Complete access to Hydra’s neurons will allow us to make headway on these questions. In addition, we will study another simple organism — the fruit fly — to elucidate how the entire brain coordinates behavior in the face of an ever-changing environment. Both Hydra and the fruit fly share a surprising number of basic neural building blocks with all animals, so we expect our insights will apply to any nervous system, humans included.

FULL PROJECT

More
Outer Brain and Inner Brain: Computational Principles and Interactions

Loosely speaking, we all have two brains. Our ‘outer brain’ interacts with the surroundings via the senses or translates intentions into movements. Our ‘inner brain’ receives sensory information from the outer brain, integrates it with memories, internal states — such as hunger or arousal — and goals, then sends a plan of action back out to our muscles.

Each brain appears to process information distinctly, a phenomenon observed from flies to humans. Even devices using artificial intelligence, such as self-driving cars, have different ways of processing ‘outer’ and ‘inner’ data. That they are different makes intuitive sense. The outer brain has to deal with enormous amounts of data. Our eyes, for example, take in 1 gigabyte per second of raw image information, but we ultimately only use a fraction of that data. The inner brain, on the other hand, processes information at a far lower rate. Humans can speak, play piano or remember things at about 20 bits per second, meaning we can choose among a million possible thoughts per second. This is a lot, but pales in comparison to the information-processing task faced by the outer brain. While the outer brain excels at filtering relevant information, the inner brain excels at integrating information from the senses, our past experience and our internal state to decide and initiate an action. Traditionally, neuroscientists have studied one system or the other — inner or outer. We have assembled a team of five neuroscientists, each an expert in one region of the inner or outer brain, to bridge the gap. By performing related experiments in parallel and leveraging our combined expertise, we can examine how the inner and outer brains communicate with each other. We will explore whether the systems really use different processing strategies or if there are alternative explanations that do not require this dichotomy. Our work will investigate these questions in mice and humans and build computer models to test different theories. With a collaborative approach, we will determine the principles underlying the different brain systems, providing insight into human behavior and potentially enhancing artificial intelligence.
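The figures quoted above fit together arithmetically: 20 bits distinguishes 2^20, roughly one million, alternatives, while a 1 gigabyte-per-second visual stream carries about 400 million times more bits per second.

```python
# The arithmetic behind the figures above.
inner_rate = 20            # inner brain: ~20 bits per second
outer_rate = 1e9 * 8       # outer brain: 1 gigabyte/s of raw input, in bits/s

print(2 ** inner_rate)           # 1048576: ~a million alternatives per second
print(outer_rate / inner_rate)   # the outer stream is ~4e8 times larger
```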

FULL PROJECT

More
Plasticity of global brain dynamics: tunable neural integration

In the last few years, powerful experimental techniques that can monitor the activity of many neurons at once have generated a new field of neuroscience — ‘neural dynamics’ — which uses sophisticated mathematical models to describe how brain activity evolves on a moment-to-moment basis.

Research on how neural dynamics supports various computations has generally been conducted in a stable environment. We will study how the brain adapts, or ‘tunes,’ its dynamics to a changing environment. This distinction is crucial: If the brain cannot learn to alter its computations based on new or unexpected information, the animal is unlikely to survive. We will study how the neural dynamics controlling simple eye-movement behaviors in zebrafish and mice can be adaptively tuned to improve visual perception. Clear vision depends on the ability to hold the eyes stably on a given location when studying a feature of a stationary object, or to move the eyes to accurately track moving objects. The ability to hold the eyes accurately at a given position depends on a brain circuit called the ‘oculomotor integrator.’ We will study how the dynamics of this circuit are tuned by creating a virtual environment that simulates what an animal would see if, due to an improperly tuned oculomotor integrator, it could not accurately control the position of its eyes. We then will monitor and manipulate neurons throughout the eye-movement control regions to see how these networks adjust, or re-tune, their neural dynamics to adapt to the new environment. Finally, we will combine our data with detailed descriptions of the brain’s wiring and apply new mathematical techniques to analyze how the observed neural signals are processed and transformed by these brain circuits. By analyzing and comparing these two organisms, we can develop general theories about how the brain adapts its activity to modify actions.
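A minimal sketch of the tuning idea, assuming a standard discrete-time leaky-integrator model (the gain values are made up, and this is not the labs' actual circuit model): the integrator holds eye position through recurrent feedback, and how well the position is held depends on how precisely the feedback gain is tuned.

```python
# Assumed discrete-time model (not the labs' actual circuit): eye position is
# held by recurrent feedback with gain w. The gain values here are made up.
def hold_position(w, pulse=1.0, steps=50):
    x = 0.0
    x = w * x + pulse        # one velocity command displaces the eyes
    for _ in range(steps):   # afterwards the circuit must hold the position
        x = w * x
    return x

print(round(hold_position(w=1.00), 3))   # well-tuned integrator: holds at 1.0
print(round(hold_position(w=0.95), 3))   # under-tuned ("leaky"): eyes drift back
```

Adaptive re-tuning, in this picture, amounts to the circuit adjusting w back toward 1 when visual feedback signals a drift.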

FULL PROJECT

More
Population dynamics across pairs of cortical areas in learning and behavior

The activity of neurons in the brain represents external sensory stimuli, internal cognitive states, and plans for upcoming motor behavior. Neurons communicate with each other through electrical impulses known as spikes.

By measuring spikes from small groups of neurons, researchers have made tremendous progress toward building sophisticated, data-driven models of brain function. However, technical limitations have so far prevented this approach from scaling to hundreds, thousands, or millions of neurons acting together. We have proposed a new collaboration between experimental and theoretical laboratories to develop models of brain function based on data from hundreds of neurons recorded in multiple brain areas simultaneously. We will investigate how learning changes the neuronal representation of sensory and decision-related information in cortex. At the core of our work is a newly developed imaging technology that allows for recording the spiking activity of hundreds of neurons at once. We have also developed powerful new statistical tools to help us analyze this data.

Working in the visual system of mice, we will address several key questions: How is visual motion encoded in multiple brain areas during decision-making? How is ongoing activity related to how the brain encodes sensory and decision-related information? How does this activity change over time during learning? Our research will shed light on how the brain encodes sensory information during decision making, and on how that encoding changes as we learn.

FULL PROJECT

More
Probing recurrent dynamics in prefrontal cortex

We all know that our thoughts change from moment to moment. In other words, it is rare for our brain to dwell in the same state for more than a fraction of a second. Yet, because of technical limitations, much of cognitive neuroscience has arguably studied “snapshots” of brain activity.

As a consequence, we have accumulated a wealth of knowledge about what the responses of brain areas or even individual neurons represent—for example, some neurons seem to respond selectively to faces—but we mostly lack an understanding of how these neural responses relate and contribute to the dynamic computations performed by large networks of neurons. In this project we attempt to address this shortcoming. Exploiting recent technical advances to record the activity of large numbers of neurons at once, we will move beyond the “snapshot” view of the brain to observe how the brain dynamically changes from one state to the next.

Working in monkeys, we will characterize the dynamic responses of the prefrontal cortex, a brain area largely unique to primates that underlies our ability to flexibly choose among possible actions under ever-changing circumstances. The monkeys will be trained to associate a sensory input—such as a visual stimulus—with a motor output—such as a rapid eye movement to a target. The monkey will have to make different associations based on the context of the situation, the rules of which the monkey will have to learn. Our goal is to develop a model of the neural networks involved in so-called context-dependent computations, and compare this model’s predictions to the experimentally obtained data. We can also take advantage of sophisticated genetic techniques to perturb the activity of neurons in the prefrontal cortex, and compare the effects of those perturbations on the monkey’s actions to the effects of simulated perturbations on the computational model. This close marriage of theory and experiments will provide a deeper understanding of the function of the prefrontal cortex, establishing a conceptual framework to explain how cognition and behavior emerge from the dynamic behavior of neural networks.

FULL PROJECT

More
Relating dynamic cognitive variables to neural population activity

Millions of neurons work together to produce perceptions, thoughts, and actions, but exactly how this happens remains a mystery. Recent advances in technology that can record the activity of many neurons at once have led to breakthroughs in understanding how populations of neurons behave. However, relating so-called population behavior to cognitive processes such as decision-making has proved a challenge.

We have made significant progress toward bridging this gap by developing a new method for tracking an important aspect of decision-making. In our experimental approach, subjects perform a two-alternative decision-making task: They are presented with randomly timed pulses of evidence in favor of one alternative or the other, and at the end of the task each subject must decide which of the two alternatives had the greater number of pulses. We found that both rodent and human subjects used a decision-making strategy called “accumulation of evidence,” which tracks how sensory evidence accumulates for or against different choices during the formation of a decision. Having identified the decision-making strategy, we next plan to incorporate simultaneous electrode recordings of many neurons to relate population activity to the accumulation of evidence in decisions. Because such an accumulation-of-evidence strategy is thought to be a core component of perceptual, social, and even economic decisions, our method provides a unique opportunity to relate neural activity to a wide variety of cognitive processes.
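A hypothetical sketch of the strategy described above: each pulse nudges a decision variable up or down, and the variable's sign at the end of the trial determines the choice. The pulse timings and noise level below are invented for illustration.

```python
import random

# Hypothetical sketch of the "accumulation of evidence" strategy for the pulse
# task: each pulse nudges a decision variable; its sign at trial end gives the
# choice. Pulse timings and the noise level are invented for illustration.
def decide(left_pulses, right_pulses, noise=0.3, seed=0):
    rng = random.Random(seed)
    dv = 0.0                                   # decision variable
    for t in sorted(left_pulses + right_pulses):
        evidence = 1.0 if t in right_pulses else -1.0
        dv += evidence + rng.gauss(0, noise)   # noisy increment per pulse
    return "right" if dv > 0 else "left"

# A trial with four right pulses and two left pulses (times in seconds):
print(decide(left_pulses=[0.2, 0.7], right_pulses=[0.1, 0.4, 0.5, 0.9]))
```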

FULL PROJECT

More
Remapping across time, space and region

We take it for granted that we can distinguish similar events yet still group them together as being similar. Two concerts, for example, offer similar sensory experiences: They take place in large rooms with many people, typically with dimmed lights and loud music.

But there are also many differences — the design of the space, the identities of the crowd, and the quality of the music. Our brains face the challenging task of learning to group experiences by similarities — they were both ‘concerts’ — yet still distinguish them as two different events. One way that neural circuits might accomplish this is via remapping, whereby the same neuron that encodes a memory might, at a different time, encode location, time or sound. To gain insight into how the same neurons could encode multiple things, we will record brain activity in rodents and monkeys as they navigate different landscapes, such as an open field where they forage for food or a maze with multiple decision points. We will use novel recording technology capable of recording from the same neurons over time to determine how individual neurons shift their activity based on the environment and the task — in essence, the phenomenon of remapping. For example, the activity of the same neuron might reflect the rat’s heading direction when moving slowly, but its speed when moving quickly. In this case, the neuron has been remapped from direction to speed. We believe a similar process underlies how one neuron can be remapped from one memory to another. With these data, we can reconstruct the behavior of large neural circuits in brain areas involved in memory and navigation. Our group will collaborate with several other experimental and computational labs to develop the techniques and mathematical tools to gain insight into this fundamental brain process.

FULL PROJECT

More
Searching for universality in the state space of neural networks

So-called “collective behavior” surrounds us. For example, the solidity of an object is the result of forces that act between neighboring atoms, on a scale nearly one billion times smaller than the object itself. Decades of work in theoretical physics have provided precise mathematical theories that explain how macroscopic phenomena such as solidity emerge from the microscopic interactions among atoms—i.e., from the collective behavior of those atoms.

Just as matter is composed of individual atomic units, the brain is composed of individual cellular units, termed neurons. Is it possible that perceptions, thoughts, memories, and actions can be described by the collective behavior of neurons? Could mathematical ideas from physics be applied to the brain’s biology? We are taking advantage of new experimental technology for simultaneously monitoring the activity of many neurons to test these very ideas. Starting with the raw data from neurons, we will borrow strategies from physics—such as the construction of thermodynamic variables from the statistical mechanics of atoms—to build descriptions of neural networks. Preliminary analyses of data from both the retina and a brain region called the hippocampus suggest that such constructions are feasible. This sets the stage for discovering universal principles governing the collective behavior of neurons, much as work in physics has described universal principles governing inanimate systems.
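One concrete example of the physics-borrowed strategy is a pairwise, Ising-style maximum-entropy model of a small network. The parameters below are made up; in practice the biases and couplings would be fitted to the statistics of recorded spike patterns.

```python
import itertools, math

# Toy instance of the physics-borrowed approach: a pairwise, Ising-style
# maximum-entropy model over 3 binary neurons. The parameters are made up;
# in practice h and J would be fitted to recorded spike statistics.
h = [0.2, -0.1, 0.0]                           # per-neuron biases
J = {(0, 1): 0.5, (0, 2): -0.3, (1, 2): 0.1}   # pairwise couplings

def energy(state):
    e = -sum(h[i] * s for i, s in enumerate(state))
    e -= sum(Jij * state[i] * state[j] for (i, j), Jij in J.items())
    return e

states = list(itertools.product([0, 1], repeat=3))
Z = sum(math.exp(-energy(s)) for s in states)          # partition function
probs = {s: math.exp(-energy(s)) / Z for s in states}  # Boltzmann weights

print(round(sum(probs.values()), 6))   # probabilities over all 8 states sum to 1
```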

FULL PROJECT

More
Single-trial Visual Familiarity Memory Acquisition

We've all experienced a sense of familiarity—that vague sense that we’ve met someone, been in a room, or seen a movie before—even if we can’t quite remember the specifics. Recent experimental work has shown that we can pick familiar images out of tens of thousands, even after seeing an image only once, and for just a few seconds.

This remarkable ability to remember whether we have previously encountered specific people, objects or scenes is called “visual familiarity memory.” Impairments in visual familiarity memory, including those that accompany disorders such as dementia, lead to devastating inabilities to recognize one’s own family or home. Unfortunately, we currently have only a rudimentary understanding of the brain processing responsible for this type of memory. One reason for this is that previous studies were only able to look at one or just a few neurons at a time, and then averaged their activity in response to repeated presentations of a stimulus such as a visual scene. However, visual familiarity memory, which by definition can be formed from just a single exposure, is poorly captured by averaging across repeated trials. New developments in measuring the activity of neurons allow us, for the first time, to observe many neurons at once as these memories are formed and retrieved.

We will examine multiple brain areas, such as the inferotemporal cortex, which is known to be involved in object recognition, as well as the brain region that provides its input, visual area V4, which processes more basic visual features. By studying the interactions between these two brain regions, we will determine exactly how familiarity is established. By elucidating these mechanisms, we can better understand some of the fundamental properties of our remarkable memories, and use these insights to unravel how memories break down in disease.

FULL PROJECT

More
Spatiotemporal structure of neural population dynamics in the motor system

Consider the simple act of reaching for a cup of coffee. The brain must decide to pick up the coffee, prepare for movement, and then execute the movement. Each stage of the process requires that neurons in the brain exhibit different activity.

Experiments have revealed that a part of the brain involved in movement, the motor cortex, is active both during the planning phase of movement and during the execution of the movement itself. But how can the same brain area—the same set of neurons—be responsible for two distinct phases? To answer this question, we must understand the patterns of activity exhibited by a set of neurons across time, a phenomenon termed “internal dynamics.” When we switch from planning to executing a movement, the internal dynamics of the same set of neurons change. This allows for the same neural network to control two different types of processes.

Working in the motor cortex, we will use recording technology that allows us to observe the activity of many neurons simultaneously. This way, we hope to address how the motor cortex changes its internal dynamics from the planning to the movement phase. Our recent results have allowed us to characterize mathematically such changes in internal dynamics. This mathematical characterization allows us to identify and study key transitions between brain states. Our research seeks to produce a better understanding of how the brain produces movement and how it transitions from one state to another more generally.

FULL PROJECT

More
The latent dynamical structure of mental activity

The human brain is composed of billions of neurons. Each neuron is capable of complicated computational feats, yet the brain as a whole is able to carry out vastly more complex functions than could any individual neuron alone.

This remarkable computational power of the brain hinges on the fine coordination among neurons. However, though we understand a great deal about how an individual neuron functions, we have little knowledge of how their collective activity in neural circuits gives rise to computations in the brain. Much of what we have learned about the brain has come from experiments that record the electrical activity of one neuron at a time, then look at the average profile of the response of that neuron to a stimulus or a task. However, in the brain, there is no “average” response. Rather, the collective activity of populations of neurons reflects the unique response of the brain each time a stimulus is experienced or a task performed. Now, new technical advances have allowed us to monitor the activity of hundreds or thousands of neurons simultaneously. We will take advantage of this technology to develop algorithms and data-analytic tools to discover how neurons are coordinated in the brain, and how that coordination gives rise to the brain’s computations. These tools will be developed and applied in close collaboration with experimental laboratories, studying a range of brain functions spanning the arc from perception to cognition and action. By identifying how the concerted action of neurons gives rise to these processes (instead of recording one neuron at a time), we will be able to see—as the brain would—how those computations unfold moment by moment.
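One widely used data-analytic tool for finding coordination among neurons is principal component analysis (PCA). The sketch below runs PCA on synthetic data in which 50 "neurons" are all driven by one shared latent signal plus noise, so a single component dominates; all numbers are invented, and this is only one of many possible tools.

```python
import numpy as np

# Illustrative sketch: PCA applied to synthetic population activity in which
# 50 "neurons" share one latent signal plus noise (all numbers invented).
rng = np.random.default_rng(0)
latent = np.sin(np.linspace(0, 4 * np.pi, 200))        # shared latent signal
loadings = rng.normal(size=50)                          # each neuron's coupling
activity = np.outer(latent, loadings) + 0.1 * rng.normal(size=(200, 50))

centered = activity - activity.mean(axis=0)
cov = centered.T @ centered / len(centered)   # neuron-by-neuron covariance
eigvals = np.linalg.eigvalsh(cov)[::-1]       # variances, sorted descending

print(f"top component explains {eigvals[0] / eigvals.sum():.0%} of the variance")
```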

FULL PROJECT

More
The neural basis of Bayesian sensorimotor integration

All nervous systems must operate under conditions of uncertainty. It is thought that one way the brain deals with this fact is to create prior expectations of the world based on experience, and integrate those prior expectations with incoming sensory input to decide and take an action. Combining prior expectations with incoming information is known in the field as Bayesian integration.

How neural circuits perform such Bayesian calculations, however, is unknown. Working in monkeys, we will combine a complex task, state-of-the-art technology for recording from multiple neurons simultaneously, and mathematical tools to uncover the principles by which neural circuits perform Bayesian integration. The task is as follows: The monkeys will observe two flashes of light, and will then be asked to reproduce the interval between the flashes with a tap of the finger or a quick movement of the eyes. The Bayesian calculation here is to integrate the memory of previously experienced intervals (the prior expectation) with sensory information (the elapsed time between the flashes). We will then use electrodes to record from brain areas thought to be involved in this task and analyze the activity of those neurons using sophisticated mathematical tools. With this approach, we can make specific predictions about how the neurons should behave based on our mathematical models, and test those predictions with experiments. More generally, our results will shed light on how animals integrate internal and external information to generate flexible, goal-directed actions.
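The Bayesian computation in this task can be sketched with Gaussian assumptions; all the numbers below are illustrative, not the experiment's actual values. The hallmark prediction is that reproduced intervals are "pulled" toward the prior mean, more strongly when the measurement is noisier.

```python
# Sketch of the Bayesian computation in the timing task. Gaussian assumptions
# and all numbers are illustrative, not the experiment's actual values.
prior_mean, prior_var = 0.80, 0.01   # experience: intervals cluster near 0.8 s
meas_var = 0.02                      # variance of the noisy timing measurement

def bayes_estimate(measured):
    """Posterior mean of Gaussian prior x Gaussian likelihood."""
    w = prior_var / (prior_var + meas_var)   # weight given to the measurement
    return (1 - w) * prior_mean + w * measured

print(round(bayes_estimate(0.95), 2))   # 0.85: pulled from 0.95 toward the prior
print(round(bayes_estimate(0.65), 2))   # 0.75: pulled from 0.65 toward the prior
```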

FULL PROJECT

More
The neural substrates of memories and decisions

The abilities to learn, remember, plan, and decide are central to who we are. These abilities require that memories be stored, retrieved, and then used to guide decision-making. Current theories suggest that memories are stored when specific patterns of neural activity cause changes in the connections among neurons, and memories are retrieved when these patterns are reinstated.

While we have some idea of the brain regions required for these processes, we lack a deep understanding of how memories are created, remembered, and used. Our team and others have found a phenomenon—termed sharp-wave ripple events—that contributes to memory retrieval and memory-guided planning. Studying this phenomenon in rats, we have already developed advanced decoding algorithms that can determine the content—i.e., the memory being retrieved—from different patterns of neural activity. But to truly understand memory retrieval, we need to move beyond simple observation to the manipulation of neural circuits. To do this, we will combine new real-time decoding algorithms with technology for monitoring and manipulating the activity of many neurons at once. Because we can decode which memory is being retrieved, we will be able to selectively disrupt retrieval of that memory while leaving other brain processes unaffected. These experiments will make it possible to understand how the disruption of specific memory events affects the processing of those events in brain regions and thereby alters the animal’s behavior.

FULL PROJECT

More
The Representation of Internal State in the Fly Brain

Our interaction with the world is influenced by our internal state. A state of hunger makes us sensitive to images and smells of food that we might ignore right after a meal. More generally, the same sensory stimulus can lead to different behaviors depending on internal state.

It is currently unclear how internal state is represented and maintained within neural circuitry, and how it acts to modify perception and behavior. We will study internal brain states in the fruit fly Drosophila melanogaster. Fruit flies are driven to eat by hunger, which is tied to various metabolic factors. We will examine whether and how these metabolic changes activate self-sustaining patterns of brain activity that represent internal state, heighten sensitivity to taste and smell, and trigger food seeking. Using a new high-speed imaging technique called SCAPE to track activity in the entire fly brain, we will identify activity patterns that generate appetite and trigger food-seeking behavior. We will also study a second important internal state in the fly, sexual arousal. Male flies become aroused when they sense certain chemical signals from a female, and this leads to courtship behavior that includes a song generated by wing vibration. The male keeps singing until the female decides to mate or decamp. This state of arousal involves neurons in the fly’s brain that respond to the female’s chemical signal and generate self-sustaining patterns of brain activity that persist well beyond the original sensory input. We will use imaging tools to record brain activity and analyze how a state of arousal triggers courtship behavior. Taken together, these lines of inquiry will offer insight into the poorly understood interplay of sensory input, internal state and action.

FULL PROJECT

More
Theory of distributed persistent activity in large-scale brain circuits

Working memory, the ability to temporarily hold and manipulate pieces of information in our minds, is crucial for thinking and flexible behavior. Scientists often study working memory in the lab by training subjects to make a response based on information from a few seconds in the past.

These ‘delayed response tasks’ suggest that working memory, at the level of brain cells, is the result of self-sustaining patterns of neural activity — a group of neurons starts firing when the initial information is presented and maintains that firing internally, after the stimulus is gone, until the animal acts on the information. Researchers reached this conclusion based on years of work painstakingly recording the activity of one neuron or a few neurons in a single local area of the brain at a time. However, neural activity patterns associated with working memory are distributed across many brain regions. Now, recent advances allow scientists to record from many hundreds of neurons at once, from several parts of the brain, in behaving animals. In addition, brain ‘connectomics’ provides quantitative information about how different brain regions are connected. We will form a collaborative team of four laboratories to take advantage of these new tools and rich data to create anatomically realistic computational models of the large brain networks that underlie working memory. We will develop these models based on experimental data collected in mice and monkeys performing delayed response tasks. In one task, for example, a mouse will be exposed to two different somatosensory stimuli, then learn a rule that tells it which of two spouts to lick based on each stimulus. We can study how the brain maintains that information in working memory by analyzing brain activity during the delay between the stimulus and the resulting action. Brain recording technology such as the new Neuropixels probe will allow us to make an unprecedented ‘neural activity map’ highlighting how different brain regions behave during the task. We can then develop computer and mathematical models based on this map, revealing how multiple regions of the brain work together to sustain working memory.
Progress in this research project has the potential to provide insights into large-scale circuit mechanisms of cognitive deficits associated with schizophrenia and other mental disorders.
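The idea of self-sustaining activity can be conveyed with a toy rate model (a sketch for intuition only, not one of the anatomically realistic models proposed here): a single unit with strong recurrent excitation keeps firing after a brief stimulus ends.

```python
import math

# Toy illustration: a rate unit with strong self-excitation holds a
# brief stimulus "in memory" after the input is switched off.
dt, tau = 0.001, 0.02        # time step and time constant, in seconds
w_self = 2.0                 # recurrent self-excitation; > 1 sustains firing
rate = 0.0
rates = []
for step in range(int(1.0 / dt)):                # simulate 1 second
    stimulus = 1.0 if step * dt < 0.1 else 0.0   # input only for first 100 ms
    drive = w_self * rate + stimulus
    rate += dt / tau * (-rate + math.tanh(drive))  # saturating rate dynamics
    rates.append(rate)

# The firing rate stays high, near the fixed point of r = tanh(2r),
# long after the stimulus ends: a minimal 'memory' of the input.
print(rates[-1])
```

Without the stimulus the unit stays silent; the brief input switches it into a stable high-activity state, the delay-period firing seen in delayed response tasks.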

Toward Computational Neuroscience of Global Brains

Neural pathways are composed of hundreds, thousands, or millions of neurons. Yet, much of our understanding of the brain comes from recordings of the electrical activity of one or a few neurons at a time. A deeper understanding requires experimental and theoretical investigations of entire neural pathways, or, ideally, entire brains.

We propose to take advantage of recent technology that allows recording of brain-wide neuronal activity in larval zebrafish. We will analyze the activity of entire neuronal pathways distributed across the fish brain while the fish orients itself within complex visual scenes, and we will also monitor the ongoing activity of neurons while the fish is at rest. An important part of this project is to develop methods for modeling brain function at multiple spatial and temporal scales; that is, we will construct theoretical and computational models that can account for the neural activity of small or large groups of neurons as it changes rapidly or slowly over time.

Through collaboration among multiple labs, we aim to accomplish three goals. First, we will collect and synthesize anatomical and physiological data on single neurons and the connections among them. Second, we will build, analyze, and simulate neural circuits. Third, we will simulate new visual stimuli and behaviors to help design new experiments. Our lab will focus on developing models and analytical methods, the lab of Florian Engert of Harvard University will design the experiments and collect the data, and the lab of Daniel Lee of the University of Pennsylvania will apply sophisticated techniques from computer science to simulate brain circuits and zebrafish behavior. Our research will provide new ideas and tools applicable not only to zebrafish, but also to large-scale recordings of neuronal activity in the brains of other organisms.

Towards a theory of multi-neuronal dimensionality, dynamics and measurement

The brain is composed of billions of neurons, yet, until recently, technology allowed recording the electrical activity of only a single neuron at a time. Now, through President Obama’s BRAIN Initiative, significant resources will be channeled into developing technology to record from thousands or millions of neurons at once. However, for these data to be useful, we must also develop the mathematical tools to analyze and interpret such large and complex data sets.

Our group is positioned at the vanguard of such efforts to develop theories that guide biological interpretation of complex data sets. In addition to developing general theories uniting dynamical systems and high dimensional statistics, our group will collaborate with experimental laboratories to test these theories in monkeys as they plan and execute complex reaches in the face of external forces that interfere with their actions. Our aim is to develop a “Rosetta Stone” between biology and mathematics that will enable us to interpret large data sets from multi-neuronal recordings not only when monkeys reach, but for any intricate behavior emerging from the collective activity of millions of neurons.

FULL PROJECT

More
Understanding neural computations across the global brain

The brain is composed of networks of thousands or millions of neurons, and modern neuroscience is just reaching the stage where researchers can observe most, if not all, of these neurons in living animals. Yet, many of the most fundamental questions in neuroscience still remain unanswered. How does the brain represent sensory information? How do responses to sensory information change when paired with good or bad outcomes? How do the brain’s sensory and motor systems change during learning?

One obstacle to answering these questions is that measurements of the activity of every neuron in the brain generate massive, complex datasets that researchers are only now learning to interpret. We have positioned our team at the forefront of these efforts to capture and interpret such complicated neural data. We are developing an open-source computing library for interpreting the large-scale data produced by whole-brain recordings in larval zebra fish. These computing strategies will incorporate cutting-edge technology from disparate fields such as machine learning, data mining, and distributed computing. In addition, this combination of experimental data and computational analysis provides a platform for fruitful collaboration between experimentalists and theorists. Our work will be particularly attractive to theoretical neuroscientists because brain simulations can be directly compared to the data: Since the recordings are from the entire brain, there is no need to make assumptions about neuronal activity. Such a partnership promises to bridge the gap not just between theory and experiments, but also to provide fundamental insights into the brain’s biology.

FULL PROJECT

More
Understanding visual object coding in the macaque brain

When we perceive a face, we see multiple aspects, such as its location in space, its overall 3-D shape, and the color of the lips and eyes. We know that these different attributes are represented in distinct parts of the brain. But how are they bound together to form a coherent whole? That is, how do we create a visual object out of a barrage of sensory stimuli?

This so-called “binding” problem is one of the central challenges to understanding how large groups of neurons work together to encode information. Studying object perception is also a way to study another central challenge of neuroscience: how internal states of the brain give rise to perceptions. For example, when one sees a face and then a vase in the famous face-vase illusion, nothing is changed in the sensory stimulus; yet, what we perceive changes. This change must be the result of changes in the state of the brain that are independent of sensory input. Working in monkeys, we will train them to recognize a specific object in an otherwise cluttered visual scene. By using state-of-the-art techniques for recording and manipulating the activity of neurons, we can study visual object formation at the level of neural circuits. With this experimental setup, our data will provide insight not only into object formation, but also into the fundamental problem of binding. Given the similarity between monkey and human brains, we expect these insights to apply to humans as well.

FULL PROJECT

More
Whole brain calcium imaging in freely behaving nematodes

How does the collective activity of individual neurons in the brain generate actions? This most fundamental question in neuroscience remains unanswered in part because we lack methods to record from every neuron simultaneously in an animal that is allowed to move about freely. We propose to start small by developing a new instrument that will allow us to record from every neuron in a microscopic worm’s brain as it freely moves about.

This worm, known as C. elegans, only has 125 neurons in its head. This is opposed to rodents or humans, which have millions or billions of neurons, respectively. To record from the worm’s brain, its neurons will be genetically engineered so that they emit flashes of light every time they are active. We are developing a new kind of instrument—a sophisticated microscope—that can both record the activity of every neuron as well as track the animal’s movements in real-time. Developing this microscope will require a multidisciplinary effort that draws upon techniques from physics, electrical engineering, computer science, and molecular biology. Our technology will lead to the first brain-wide models of how neural activity leads to actions. We are starting small, but the work here in C. elegans will serve as a launching pad for further investigations into the mammalian brain, and, one day, this tiny worm may even shed light on how the human brain works as well.

FULL PROJECT

More
Advancing Research in Basic Science and MathematicsSubscribe to SCGB announcements and other foundation updates