Projects

Analyzing a complex motor act at the mesoscopic scale

Our motor systems are impressive. We can learn to effortlessly produce highly skilled muscle movements, such as hitting a golf ball or playing the violin. Our brains must step through sequential patterns of activity that enable these kinds of behaviors, but we know very little about how this is accomplished.

To investigate this issue, our laboratory studies another skilled, complex motor behavior: the courtship song of the zebra finch. This song is learned from the bird’s father in a process that parallels speech acquisition in infants. Yet, in contrast to speech, much is known about the neural circuits that produce singing behavior. The song is driven by a network of brain structures, one of which contains the circuitry for generating the sequence of syllables that compose the song. Different neurons in that region show a flurry of activity at different points in time during each song. Little is known about how the song is represented in these neurons, in part because we have lacked methods to record from multiple neurons simultaneously in this brain area. Using 2-photon microscopy, we have developed a method to watch the neurons in the brain of a zebra finch while the bird produces its song. Our lab will work in close collaboration with theoretical neuroscientist Liam Paninski to develop quantitative methods to analyze these data. Our results are already beginning to shed light on how complex, learned motor acts are generated, with relevance not just to birdsong but to other complex movements, in both health and disease.

Attentional modulation of neuronal variability constrains circuit models

Each time you look at a picture on a wall, even though you see the same thing, the electrical activity of neurons in your brain’s visual areas will be slightly different. These slight differences are known as variability, and typically researchers seek to remove variability from their data by taking the average of neural responses over many trials of whatever task they are studying. Then, they build theoretical models based on the averaged data.

However, researchers have run into trouble with this approach: many different models seem to explain the same data, making it difficult to determine which model is correct. We propose a radically different approach in which we embrace the variability and use it to constrain new theoretical models. We will test our model using visual attention, the process by which observers focus on particular parts of a complex scene. We will then record the activity of large groups of neurons and examine how visual attention modulates the variability in those neurons. The goals of our research are threefold. Can our model account for the existing data on how attention changes variability? Can we conduct experiments to determine how attention affects the extent to which variability is shared among neurons? Can we extend our model to these new data? By focusing on variability, we can overcome many of the shortcomings of previous models, establishing a new approach for studying neural circuits in the visual system and other areas of the brain.
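As a rough illustration of the distinction between trial-averaged responses and trial-to-trial variability, consider simulated spike counts; all numbers here are hypothetical, and this is a sketch of the concept rather than our analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spike counts for 3 neurons over 200 repeats of the same
# stimulus (mean rates are hypothetical).
mean_rates = np.array([5.0, 10.0, 20.0])
counts = rng.poisson(mean_rates, size=(200, 3))

# The traditional approach: average across trials, discarding the
# trial-to-trial variability.
trial_average = counts.mean(axis=0)

# The alternative: characterize the variability itself.
# Fano factor (variance / mean) of each neuron ...
fano = counts.var(axis=0) / trial_average
# ... and the covariance across neurons, which measures how much of
# the variability is shared between neurons rather than private.
shared = np.cov(counts.T)
```

The diagonal of `shared` is each neuron’s own variance; the off-diagonal terms capture the shared variability that attention is hypothesized to modulate.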

Catching fireflies: Dynamic neural computations for foraging

Imagine catching a firefly. There are multiple steps: seeing the firefly, estimating its location in between flashes, deciding when to move to catch it, and then moving. Each step engages different brain areas—for example, seeing the firefly activates visual processing regions whereas moving activates motor systems—yet how these areas interact to produce an action based on sensory experience is a mystery.

We have designed an experiment to get at these interactions using, it turns out, virtual fireflies. Working in monkeys, we trained the animals to forage through a virtual environment for flashing specks of light. As in real life, such a virtual task engages a variety of brain regions, including those involved in sensory and perceptual processing, navigation, decision-making, and movement. While previous work has studied each of these brain regions individually, we have taken the research one step further to study many regions at the same time. Using sophisticated recording technology, we simultaneously monitor the electrical activity of neurons in each brain region, allowing us to observe the flow of information from brain region to brain region as the animal performs the foraging task. Because even the simplest tasks require the precise coordination of many neural networks distributed across many different brain areas, this approach will be broadly applicable to studying many tasks the brain performs.

Circuit mechanisms of social cognitive computations

While we may take for granted social tasks such as inferring what others are thinking and feeling, or predicting someone’s next move, they pose considerable processing challenges for the brain. Social cognition is incredibly complex, with accurate judgments often made with very little evidence, yet we perform these tasks automatically, virtually every moment in time.

One skill at the core of social cognition is facial recognition, itself a complex task, which is found in several species of primates, including humans. Face recognition is supported by a dedicated face-processing network, which encompasses many of the organizational features of the brain as a whole. Because this network is devoted to processing the diverse social cues the face conveys, it is an ideal entry point for investigating the neural mechanisms of social cognition. We will study activity in this model system at many different levels. In our experiments, we will measure the activity of individual neurons, or even parts of neurons, and we will measure the activity of local networks of neurons within a face-selective brain region. We will develop cutting-edge machine-learning techniques to analyze and understand these high-dimensional data and to create models of how this network processes information. This work will allow us not only to understand how faces are processed, but also how neurons interact to generate the key characteristics of social cognition. With this work we also aim to lay the foundation for future work in autism models to understand the neural mechanisms underlying the alterations of social information processing in this condition.

Computational principles of mechanisms underlying cognitive functions

The brain is composed of billions of neurons. Many of these neurons respond to particular features of the sensory inputs. For example, in the visual system, there are neurons that respond when viewing a horizontal edge but not when viewing a vertical one. These types of neurons have been studied for a long time.

However, when the brain is asked to perform more complex tasks than simply observing an individual feature of a visual stimulus, a new class of neurons emerges. These neurons are not so simple: they respond to multiple features of the sensory input, and their responses are modulated by our expectations, our feelings and, more generally, by any aspect of our thoughts. These neurons, said to have “mixed selectivity,” were ignored for many years, but, thanks to work from our group and others, their importance is only now being appreciated. That is, even though their responses are not easily interpretable, they play a critical role in solving complex cognitive tasks. In abstract terms, these neurons increase the ability of downstream “output” neurons to generate a much larger set of responses for a given set of inputs. This enhanced ability likely underlies the capacity to perform a complex and flexible set of actions based on the same sensory input. Working in close collaboration with experimental neuroscientist Daniel Salzman, we aim to understand the role of mixed-selectivity neurons in processing complex cognitive tasks. We will use technology for simultaneously monitoring the activity of many neurons in multiple brain regions to determine what information is contained in those brain areas and how that information is related to how animals behave. The close link between theory and experiments will facilitate the development of new mathematical tools for analyzing the collective activity of neurons, bringing us one step closer to understanding not just how neurons encode simple stimuli such as horizontal and vertical bars, but also how they perform complex tasks such as decision-making.
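A toy example conveys why mixing matters. Suppose a task requires responding only when two binary features differ (an exclusive-or). No linear readout of “pure” feature-selective neurons can solve this, but adding a single neuron with mixed (conjunctive) selectivity makes the problem linearly solvable. This is a minimal sketch, not a model of real neurons:

```python
import numpy as np

# Four task conditions defined by two binary features, e.g. stimulus
# identity (a) and task context (b). Values are illustrative.
conds = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
xor = conds[:, 0] ^ conds[:, 1]  # respond only when the features differ

# "Pure selectivity": each neuron encodes one feature on its own.
pure = conds.astype(float)

# "Mixed selectivity": add one neuron that responds nonlinearly to the
# conjunction of both features.
mixed = np.column_stack([pure, conds[:, 0] * conds[:, 1]])

def linearly_separable(X, y):
    # A simple perceptron run: if the classes are linearly separable,
    # the perceptron converges and we return True; otherwise it keeps
    # making mistakes and we give up after many passes.
    Xb = np.column_stack([X, np.ones(len(X))])  # append a bias term
    w = np.zeros(Xb.shape[1])
    for _ in range(1000):
        errors = 0
        for xi, yi in zip(Xb, 2 * y - 1):  # labels in {-1, +1}
            if yi * (w @ xi) <= 0:
                w += yi * xi
                errors += 1
        if errors == 0:
            return True
    return False

print(linearly_separable(pure, xor))   # False: pure neurons cannot solve XOR
print(linearly_separable(mixed, xor))  # True: one mixed neuron suffices
```

This is the abstract sense in which mixed-selectivity neurons expand the set of responses a downstream readout can generate from the same inputs.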

Cortical mechanisms of multistable perception

Some visual patterns can be seen in more than one way, like the famous Necker cube which reverses its 3D orientation spontaneously. These “multistable” stimuli are especially useful for the study of brain mechanisms of perception, because identical stimuli can give rise to different percepts, giving us direct access to the mental states that underlie perception.

We plan to use this access to explore the areas of the monkey brain that control switches in perceptual state. We will use three-component moving "plaid" patterns because we know that they are processed by a well-studied area, MT, which we have studied in the past. We plan to extend these observations by measuring the activity of multiple neurons simultaneously, and by relating that activity to the animals' reports of changes in stimulus appearance. MT shows activity related to perceptual multistability, but it is certainly not the only brain region involved. We will therefore also record from areas that are likely to control perceptual switches, investigating how neuronal activity in those areas drives changes in the monkey’s perceptual state. Our work will not only shed light on neural activity in the visual system and its perceptual and behavioral consequences, but will also establish broader links between brain activity and behavior.

Corticocortical signaling between populations of neurons

How the human brain’s nearly 100 billion neurons interact with each other to give rise to cognition, emotion, and action is a total mystery. Neurons are embedded in dense networks, with thousands of connections to other neurons. These networks have connections at both the local level, between neighboring neurons, and at the global level, as projections between brain areas.

Yet even the basic principles of how all of this complicated networked activity leads to thought and behavior are unknown. As we unravel this mystery, understanding the interactions between brain regions will be critical. One roadblock has been that previous studies have focused on correlations between pairs of neurons or field potentials, each recorded in a different brain area. Yet neurons may also communicate at the level of groups, or populations. Emerging technology now allows us to measure the activity of hundreds of neurons in multiple brain areas simultaneously, and advances in statistical analysis are beginning to decipher this complex population-level activity. In our research, we will focus on this question by studying the visual system of monkeys. We will place electrodes in two brain areas involved in processing visual information, V1 and V2. Neurons in V1 send information about visual scenes to neurons in V2; this feedforward flow is complicated by feedback connections, through which V2 sends information back to V1. With help from sophisticated statistical modeling, we will determine what information is passed along, in which direction, and why. This will lead to experiments in which we compare brain activity between brain areas V1 and V4 while monkeys report decisions based on their visual percepts. By studying and modeling the flow of information across multiple brain areas, our results will provide insight not only into the visual system, but also into how any collection of brain areas cooperates to give rise to perceptions and decisions.
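One statistical idea behind this kind of analysis can be sketched with simulated data: fit a linear map from one population’s activity to another’s, then ask how many dimensions that map actually uses. The population sizes, noise level, and rank-2 “channel” below are all hypothetical, and real analyses are considerably more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated population activity: 500 time bins from 30 "V1" neurons
# driving 20 "V2" neurons through a low-dimensional channel.
v1 = rng.normal(size=(500, 30))
channel = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))  # rank-2 map
v2 = v1 @ channel + 0.1 * rng.normal(size=(500, 20))

# Fit a linear map from V1 activity to V2 activity: a crude stand-in
# for asking what V2 "receives" from V1.
W, *_ = np.linalg.lstsq(v1, v2, rcond=None)

# The singular values of the fitted map indicate how many dimensions
# of V1 activity are actually communicated: count those well above the
# noise floor (estimated here from the smallest singular value).
svals = np.linalg.svd(W, compute_uv=False)
effective_rank = int(np.sum(svals > 10 * svals[-1]))
print(effective_rank)
```

In this simulation only a low-dimensional subspace of V1 activity is passed on, mirroring the idea that communication between areas may occur through a restricted population-level channel.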

Decoding internal state to predict behavior

The behavioral repertoire of any animal is ultimately determined by the activity of its brain cells. Yet, how exactly neural activity leads to actions remains unknown. Which patterns of neuronal activity trigger which behavior? How does an animal string together many actions to accomplish a goal?

Understanding these processes will not be trivial, and will require deep insight into animal behavior, recordings of the activity of many neurons at once, and sophisticated mathematical models. We propose a collaboration among three laboratories to bring these techniques to bear on such fundamental questions in neuroscience. Working in mice, we will use a novel system to record the electrical activity of many neurons at once while simultaneously monitoring a freely moving mouse’s posture in 3D. We can then use mathematical models to determine how neural activity relates to the mouse’s movements. We will specifically focus on a brain area known as the striatum, and test whether the mouse’s voluntary choices to move are reflected in neural activity in the striatum. In this way, we can address the fundamental question of how an animal’s behavioral state is represented in neural circuits. Given the similarities in brain structure between mice and humans—though the mouse brain is of course far less complex—we expect our insights to apply to humans as well.

Dense mapping of dynamics onto structure in the C. elegans brain

In the human brain, billions of neurons wired into networks sense the environment; process, store, and retrieve information; and produce thoughts and behavior. However, it’s not understood how the activity of neurons gives rise to these cognitive processes. A major program of neuroscience is to address this question by building so-called “connectomes,” or complete wiring diagrams of the connections among neurons.

These efforts are occurring in a variety of organisms, including humans. However, the projects are challenged by the sheer complexity of the connectomes, even in simple organisms such as fruit flies and mice. They also lack a functional framework for interpreting the anatomical data that connectomes provide. This gap is most readily apparent in the story of the simple worm known as C. elegans. Twenty-five years ago, the connectome of C. elegans was mapped in exquisite detail—and C. elegans remains the only organism with a complete wiring diagram—yet it is still largely unknown how this brain produces any type of behavior. Taking advantage of the relative simplicity of C. elegans, our lab has developed a new microscopy technology that can, for the first time, capture the real-time activity of every neuron in a whole brain. We will use this platform, coupled with a new experimental approach that arouses the worms into action with complex stimuli, as well as sophisticated computational methods, to unravel the relationship between the connectome and functional activity in the worm brain. This basic “function-from-structure” question is of fundamental significance to the multitude of brain-mapping projects underway in more complex organisms. Our work in the worm will establish a baseline for how useful a connectome is in elucidating brain function, informing a range of connectome projects, including those involving the human brain.

Dynamics of neural circuits, representation of information, and behavior

How the brain holds information in short-term memory is a mystery. Individual neurons can only hold information for a short period of time, much shorter than what would be required to store a memory. So how do brains maintain short-term memories?

A number of theoretical models propose that the key is in the activity not of single neurons but of populations of neurons. Working in mice, we have developed a simple task to test whether groups of neurons maintain memories. Mice are allowed to explore an object with their whiskers. Based on this sensory evidence gathered by whisking, the mice make a decision about the object’s location, which they will later indicate by licking in one direction or another. A key aspect of the experiment is that we impose a short waiting period of a few seconds between when the mouse decides where the object is and when the mouse licks to indicate its decision. In this manner, the mouse must hold the decision in its brain for a few seconds: in other words, we’ve trained the mouse to create a short-term memory. Then, using technology that allows us to monitor groups of neurons simultaneously and manipulate their activity, we can test whether these neuronal populations can indeed hold the information relevant to a memory. Many theoretical models propose ways in which the collective activity of neurons holds memories, and our experimental set-up will allow us to test the evidence gained from neurons against these theoretical models. These results will not only shed light on how memory is stored in neural circuits, but will explore the much more general question of how neural activity represents information in the brain.

Global brain states and local cortical processing

The cerebral cortex—the outermost, wrinkled layer of the brain—is involved in many cognitive processes, including sensory processing, decision-making, and movement. It’s generally thought that these tasks are carried out by so-called “local” circuitry, which is composed of networks of nearby neurons. However, the brain also experiences what are called “global states,” such as attention, arousal, or motivation.

These global states can affect the processing carried out by local circuits. Exactly how global states influence local circuitry, however, remains a mystery. We plan to investigate these fundamental interactions between local computations and global brain states. Global brain states are characterized by slow oscillations of neuronal activity across the entire cortex—what are commonly referred to as brainwaves. When the brain needs to perform a certain task, however, it carries it out using local circuitry. We hypothesize that when local circuitry is in use to perform a task, the cortex stops oscillating near that local region. To test this hypothesis, we plan to use imaging techniques to visualize global patterns of brain activity while a mouse transitions from one global brain state to another—in this case, the global brain state will be changed by having the mouse engage in a visually demanding task followed by passive viewing. But how do changes in global brain states affect processing in local networks of neurons? One hypothesis is that global brain states affect the responsiveness of neurons. For example, during the visually demanding task, the global brain state may increase the responsiveness of certain neurons related to visual processing. To test this hypothesis, we plan to visualize the activity of local groups of neurons and compare their responsiveness as the mouse transitions from a visually demanding task to passive viewing. Finally, to understand exactly how this brain-state modulation of local processing is carried out in individual neurons, we plan to visualize the responses of one neuron at a time. The hypothesis is that one part of the neuron—termed the apical dendrites—carries information about global brain state to the neuron’s main processing center, termed its soma. Our research will reveal how global patterns of activity affect processing of local circuitry, shedding light on a common motif in neuronal computation.

Hidden states and functional subdivisions of fronto-parietal network

The essence of cognition is choice, and to understand choice we need to understand how neurons in the brain generate decisions in complex settings. To make a decision, the brain must combine different sources of information by weighing them based on their relevance.

Past studies have typically gained insight into neural mechanisms of decision-making by recording from one or a small number of neurons at a time and by analyzing the average neural responses across several decisions. However, this approach has its limitations and understanding how decisions are made at the neural level has so far proved challenging. We plan to record from hundreds of neurons simultaneously in multiple brain areas while monkeys make a choice. We expect to be able to predict the monkey’s choices ahead of time based on our large-scale neural recordings. In addition, we will use the data to understand how different functional modules in the brain communicate and coordinate their response to form a decision and execute an action. Our studies will enable better understanding of cognition, and guide development of artificial systems that can mimic human decision-making. Moreover, they will facilitate better treatments for deficits in the decision-making process in cognitive and mental disorders.

Higher-level olfactory processing

How do complex patterns of activity in large populations of neurons gain meaning and guide behavior? This question can be addressed in the olfactory systems of fruit flies and mice because, in both of these organisms, odors are represented randomly in key olfactory circuits: the mushroom body in flies and the piriform cortex in mice.

These random representations have no inherent meaning, and thus meaning must be imposed through learning. Study of these systems will involve extensive collaboration between experimental and theoretical researchers. The experimental work will make use of genetic techniques for visualizing and manipulating neural activity. The theoretical work will involve building models to test hypotheses arising from the data and providing predictions that lead to further experimentation. The goal of these studies is to illuminate how initially arbitrary patterns of neural activity gain meaning through experience-dependent learning.

How the basal ganglia forces cortex to do what it wants

It is well established that multiple brain areas are involved in generating movements, yet individual brain areas are often studied in isolation. A wealth of data indicates that the basal ganglia interact with the cerebral cortex and are important for movements. For example, diseases of the basal ganglia, such as Parkinson’s, cause profound movement disorders.

Understanding the role of the basal ganglia in movements will therefore be critical to understanding movements in general, and to finding cures for movement disorders. Neurons in the basal ganglia and those in the cerebral cortex are intimately connected to each other, indicating that these two regions likely act together during movements. But what is the relationship between neuronal activity in the cortex and neuronal activity in the basal ganglia? Our goal is to understand this relationship in both monkeys and rodents. One idea is that the basal ganglia bias, or force, the cortex to choose one action over another—especially an action that could lead to a reward. The basal ganglia’s known role in predicting rewards is consistent with such a relationship. We will test this possibility by using technology to record the activity of many neurons at once, and also to manipulate the activity of those neurons while the animals perform various movement-related tasks, such as reaching. In general, our work will shed light on how two major brain areas contribute to movements based on reward, paving the way for insights into movement disorders, such as Parkinson’s.

Interaction of sensory signals and internal dynamics during decision-making

Imagine deciding whether or not to eat a piece of cake. When making that decision, there are two main factors. First, is the cake appetizing? That is, does the sensory experience of the cake compel you to eat it? Second, are you hungry?

That is, is the internal state of your brain—hungry or not—conducive to eating the cake? Every decision depends upon these two factors—incoming sensory stimuli and internal brain state—yet how they interact in neural circuits remains a mystery. We are working in mice to study how internal states are combined with sensory experience to guide decisions. We have trained mice to make choices about various types of sensory stimuli. To investigate how internal states influence these perceptual decisions, we plan to take advantage of the fact that, even without training, each mouse has an inherent bias to favor one choice over another. This bias will be reflected in the internal state of the brain. So, because of the pre-existing bias, even when the animal is making a decision about the same sensory stimulus, the internal brain state will be different depending on which choice the animal makes. Then, we will record the activity of large populations of neurons. With this technique, we can compare the neuronal activity to the same sensory stimulus during different brain states—that is, when the choice is the same as or different from the mouse’s initial bias. By computationally modeling the network of neurons, we will be able to determine how bias—i.e., the internal brain state—affects the activity of the neural network involved in decision-making. Finally, once we have established how brain state changes the activity of the population of neurons, we can use sophisticated genetic techniques to examine which neurons in the population contribute to the changes in brain state. By incorporating information about internal brain states, these experiments will allow a much deeper understanding of decision-making.

Large-scale cortical and thalamic networks that guide oculomotor decisions

The cerebral cortex—the outermost, wrinkled layer of our brains—contains a variety of brain areas that are connected to form neural networks. Some areas of the cerebral cortex, including two areas known as PFC and PPC, are specialized to be involved in movements to a goal or target.

These movements might be rapid eye movements, known as saccades, or arm reaches. When faced with a choice about which movement to make, PFC and PPC contain patterns of neural activity that predict the impending decision. Other brain areas outside of the cortex, such as the thalamus, are also involved in that choice. Our goal is to dissect the activity of the neural networks formed by PFC, PPC, and the thalamus by building computational models and testing them with experiments. We specifically plan to study the connectivity of the networks in these three brain areas. We have developed wireless technology to record the activity of neurons in many brain areas at once. We will take advantage of this new technology to record from neurons while trained monkeys select a target with a saccade. This set-up will allow us to examine network processing while the monkey is making decisions. We can then use sophisticated genetic techniques to assess how directly the PFC, PPC, and thalamic networks influence one another, and how information flows through those neural networks. Working in collaboration with theoretical neuroscientists, we will build computational models of these networks to study the interactions among all three brain areas and test the models’ predictions with new experiments. This arrangement allows for a close marriage of theory and experiment, and should provide general insights into how multiple brain areas work together when making decisions.

Large-scale data and computational framework for circuit investigation

Decades of research have uncovered fundamental principles about how the eye works. The cornea and the lens act in concert to form an image of the world on the retina, a paper-thin sheet of neural tissue that lines the back of the eye.

The retina is a laminar structure with about 60 distinct types of neurons. The input layer contains specialized photoreceptor cells that sense light. The signals from the photoreceptors are processed by neurons in the other layers, and the processed output is transmitted along the optic nerve to the brain. Because of this processing, the role of the retina in vision is much more complex than that of reporting camera-like images to the brain. However, we lack a unified model that integrates the considerable knowledge about the retinal components and offers a means of calculating the flow of signals from the incident image, through the retina, and onto the optic nerve. We are spearheading an effort to develop such a unified computational model, which will be based on experiments from laboratories around the world, including our own. This model will incorporate existing information about the physiological optics of the eye, as well as key features of retinal neural circuits. By synthesizing and refining our understanding of the eye, this model will provide a foundation for further investigating the human visual system, evaluating approaches to restore eyesight, and other practical applications in science, engineering, and medicine.
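One common building block for models of this kind is the linear-nonlinear (LN) cascade, in which a retinal ganglion cell’s output is approximated as a linear filter of the incident image followed by a rectifying nonlinearity. The sketch below is a toy illustration of that idea, not the unified model itself; the filter shape and rate scale are hypothetical:

```python
import numpy as np

# A toy 8x8 "incident image" with a bright spot in the center.
image = np.zeros((8, 8))
image[3:5, 3:5] = 1.0

# Linear stage: a center-surround receptive field, modeled as the
# classic difference-of-Gaussians filter of retinal ganglion cells.
y, x = np.mgrid[-4:4, -4:4] + 0.5
center = np.exp(-(x**2 + y**2) / (2 * 1.0**2))
surround = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
rf = center / center.sum() - surround / surround.sum()  # sums to ~0

drive = (rf * image).sum()  # filtered signal for this image

# Nonlinear stage: rectification converts the filtered signal into a
# firing rate (spikes/s; the scale here is arbitrary).
rate = 100.0 * max(drive, 0.0)
print(rate)  # positive: a centered spot excites this ON-center cell
```

Chaining stages like this one, informed by measured filters and nonlinearities for each retinal cell type, is one way a unified model can propagate a signal from image to optic nerve.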

Mechanisms of context-dependent neural integration and short-term memory

The way in which neurons process information determines how the brain functions. For example, in order to make a decision, neurons accumulate evidence towards different, competing alternatives, eventually allowing an animal to make one choice over another.

These accumulating neurons are termed “neural integrators.” Neural integrators can be thought of as short-term memory systems that store a running total of the inputs they have received. While easiest to understand in terms of decision-making, neural integrators show up in a variety of other brain processes. For instance, in motor control and navigation, when signals that contain information about the velocity of movements are integrated, they become signals that contain information about body position. In either of these cases—decision-making or motor control—the accumulation and storage of information can be context-specific. That means that different information is accumulated depending on the conditions under which a task is performed; in other words, the task’s context. We will investigate such context-specific accumulation and storage of information in a well-studied system in the brain: the oculomotor system that controls eye movements. This system contains a neural integrator circuit that converts eye velocity signals into eye position signals. This conversion is context-specific: the circuit maintains distinct patterns of neural activity depending on whether a given eye position has been reached through a sudden, rapid eye movement or a slower, tracking eye movement. We will develop a computational model of the oculomotor neural integrator system in larval zebrafish, and collaborate with experimentalists to incorporate data from the activity of every neuron in this system. We will test our model predictions using sophisticated genetic techniques to manipulate the activity of neurons to determine the core features of the process of neural integration in different contexts. This work promises to reveal ubiquitous mechanisms by which neurons accumulate information and store it in short-term memory, applicable not only to motor systems, but also to higher-level cognitive processes such as decision-making.
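The velocity-to-position conversion can be sketched in a few lines. A perfect integrator sums its input and then holds the total, which is exactly a short-term memory; a “leaky” integrator gradually forgets. All units and numbers below are illustrative, not fitted to data:

```python
import numpy as np

# Eye-velocity command: a brief, saccade-like pulse followed by
# silence (units and numbers are illustrative).
dt = 0.001                 # 1 ms time step
velocity = np.zeros(2000)  # 2 s of input
velocity[100:150] = 400.0  # 400 deg/s for 50 ms -> a 20-degree movement

# A perfect integrator turns the velocity signal into an eye-position
# signal that persists after the input ends -- a short-term memory.
position = np.cumsum(velocity) * dt

# A leaky integrator with time constant tau forgets: the stored
# position decays back toward zero, as happens when the integrator
# circuit is perturbed.
tau = 0.5  # seconds
leaky = np.zeros_like(velocity)
for t in range(1, len(velocity)):
    leaky[t] = leaky[t - 1] + dt * (-leaky[t - 1] / tau + velocity[t])

print(position[-1])  # 20.0: the perfect integrator holds the position
print(leaky[-1])     # much smaller: the leaky integrator has forgotten
```

Distinguishing perfect from leaky integration, and how context reshapes the circuit’s stored patterns, is exactly the kind of question the model and perturbation experiments can address.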

Modulating the dynamics of human-level neuronal object representations

How does the brain give rise to the mind? In other words, how does neural activity give rise to thoughts and perceptions? This is one of the deepest questions in modern neuroscience, and we propose to approach it by studying how monkeys perceive visual objects.

Such “visual object recognition” is a critical step to understanding the environment, and it underlies such diverse cognitive processes as judgment, planning, and decision-making. Yet, how the brain’s neural circuits work together to decipher complex visual scenes remains a mystery. To tackle this challenging problem, we propose to record the electrical activity of hundreds of neurons across multiple brain areas that underlie visual object recognition in monkeys. We will also develop a computational model that links neural activity in these brain regions to the visual object perceived by the monkey. Our model will be able to predict the perception of the monkey from the measured neural activity. Because monkeys have very similar visual systems to humans, we expect that this work will directly inform how the human brain solves these same problems.

FULL PROJECT

Network mechanisms for correcting error in high-order cortical regions

Unaided by modern technology, our very survival would depend upon our brain’s ability to accurately map external space. Returning home or to a safe haven, for example, requires that we identify our location in space, remember the way back, and navigate there.

For over a century, scientists have asked how the brain represents and remembers our external environment. Only in the last few decades, however, have researchers discovered the basic building blocks of an internal, neural navigation system. This system depends upon neurons in a brain region called the medial entorhinal cortex, which translate the external environment into an internal map of space. In the medial entorhinal cortex, neurons called “grid cells” provide the basis of this internal map of space. A given grid cell increases its activity when an animal or human moves through particular locations in space. For example, if a mouse runs in a straight line, a grid cell will increase its activity every 30 centimeters, relaying to the mouse how far it has run through space. Grid cells, therefore, generate a representation of space similar to a longitude and latitude coordinate system. While originally discovered in rodents, grid cells have now been found in a range of species, from bats to humans, suggesting they are fundamental to how the brain remembers location and guides navigation through space. However, despite the ubiquity of grid cells, how the brain actually creates neurons that respond this way is largely unknown.

Our team will address this gap in knowledge by aiming to identify how sensory information can help the spatial maps created by grid cells remain stable over time. Taking an interdisciplinary approach, we will record the electrical activity of neurons in the medial entorhinal cortex of mice while they navigate through space. In addition, we will develop a computational model to investigate the exact way in which these neurons act to generate their external representations of space. Our work will more broadly address how neural activity in the medial entorhinal cortex gives rise to spatial memory and navigation, not just in mice, but in other animals, including humans, as well.
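The periodic firing described here, a burst of activity every 30 centimeters, can be illustrated with a toy tuning curve. The period, peak rate, and functional form below are illustrative assumptions, not measured values:

```python
import numpy as np

# Toy 1-D grid cell: firing rate peaks every 30 cm as the animal runs
# along a straight track. Period and peak rate are invented values.
period_cm = 30.0
peak_rate_hz = 20.0

def grid_rate(position_cm):
    """Firing rate of an idealized grid cell at a given track position."""
    phase = 2 * np.pi * position_cm / period_cm
    return peak_rate_hz * (0.5 * (1 + np.cos(phase))) ** 2  # sharpened periodic bumps

positions = np.array([0.0, 15.0, 30.0, 60.0])
rates = grid_rate(positions)   # peaks at 0, 30, and 60 cm; trough at 15 cm
```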

FULL PROJECT

Network properties and plasticity in high- and low-gain cortical states

Neural networks in the brain have their own intrinsic activity independent of incoming sensory stimuli. This intrinsic activity is known as the global brain state, and the brain’s state will affect the processing of incoming sensory stimuli.

For example, what we notice in the world is a function of what we pay attention to. Another way to say this is that the global brain state of attention determines what sensory stimuli we process. But exactly how, at the level of neural circuits, do global brain states influence sensory processing? Working in the visual system of mice, our lab has made significant progress towards answering this question. We have found that locomotion puts the visual cortex of mice into a so-called “high-gain” state, which renders it more sensitive to visual stimuli. In this state, visual experience can cause long-lasting changes in the adult brain. We will allow mice to walk through a simple virtual reality environment in which they learn a foraging task, associating certain visual objects with a reward. When the mouse passes one of these virtual objects, it is trained to pause and lick. We will study the mouse under two conditions: when the mouse is moving through the virtual environment on its own, and when the mouse is still and the virtual objects move past it. In both cases, the mouse has to lick the passing objects for the reward. But in the two cases, even though the sensory stimuli remain the same, the brain state is different: the mouse is either moving or not. We will then implant electrodes that allow us to monitor the activity of multiple neurons at the same time. We hypothesize that the mouse will be better at the foraging task when moving, because locomotion puts the brain in a high-gain state. We will then use sophisticated genetic techniques to manipulate neural activity and mathematical analyses to determine which features of activity in the high-gain state are responsible for the increased learning. Evidence from humans shows that global brain state can affect perceptual learning. Our results, therefore, will shed light not only on how movement-induced brain states influence visual perception, but also on how global brain states affect cognitive processing more generally.
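A "high-gain" state is often simplified as a multiplicative scaling of sensory responses; the sketch below uses that simplification with an invented gain value and tuning curve, purely for illustration:

```python
import numpy as np

# Toy "high-gain" state: locomotion is modeled as a multiplicative scaling
# of orientation-tuned visual responses. Gain value and tuning curve are
# invented for illustration.
orientations = np.linspace(0, np.pi, 8, endpoint=False)
preferred = np.pi / 2

def response(stim, gain=1.0):
    """Orientation-tuned response scaled by a state-dependent gain."""
    return gain * 10.0 * np.exp(np.cos(2 * (stim - preferred)) - 1)

still = response(orientations, gain=1.0)     # stationary mouse
running = response(orientations, gain=2.0)   # locomotion: doubled gain

# Gain changes response amplitude but not which stimulus is preferred.
same_preference = bool(still.argmax() == running.argmax())
```

The key property of pure gain modulation, under this assumption, is that sensitivity increases while stimulus preference is unchanged.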

FULL PROJECT

Neural circuit dynamics during cognition

How does the activity of neurons give rise to cognitive phenomena such as short-term memory or decision-making? To address this question, we will take advantage of an experimental setup we have perfected over the course of a decade.

In this setup, a rodent is allowed to navigate through a virtual reality world while we record the activity of many neurons in its brain at the same time. We can also use sophisticated genetic techniques to perturb the activity of neurons in the brain, allowing us to test models in a causal manner. The animals will perform such tasks as making decisions based on memories. We will analyze the data in collaboration with theoretical labs, which will bring statistical and mathematical expertise to the table. Because tasks such as making decisions based on memory cannot be explained by simple responses to sensory stimuli alone, we will be studying how the intrinsic activity of the brain—i.e., the brain state—affects choices. Because of the similarity of mammalian brains, we expect these insights to apply to humans as well.

FULL PROJECT

Neural Circuit Dynamics for Sequence and Variability

Songbirds produce a complex, learned sequential behavior that shares features with the most intriguing human behaviors, such as speech, musical performance and even thinking and planning. There is no other model organism in neuroscience that provides this unique combination of sophisticated behavior and access to the underlying brain circuits. Birdsong, like many complex human behaviors, is not innate but is learned by imitating the behavior of parents and other individuals. We have previously identified circuits in the bird’s brain that power this remarkable behavior. For example, we have found that a circuit, called HVC, serves as the clock, controlling the timing of song. We have also discovered that another circuit, LMAN, drives “babbling” in young birds, allowing them to explore different sounds as they learn to sing. These are examples of circuits that internally generate dynamic, varying patterns of neural activity—the engines that make our brains run. We are developing new technologies to simultaneously record from many neurons in singing birds, and working with theoretical neuroscientists to develop mathematical models of how these circuits function and how they emerge during development.
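A circuit that "serves as the clock" can be caricatured as a feedforward chain in which a pulse of activity hops from one unit to the next, marking out a stereotyped sequence of times. This toy network is an illustration of the concept, not a model of HVC's actual connectivity:

```python
import numpy as np

# Toy clock-like sequence generator: a feedforward chain where each unit
# excites the next, so a single pulse of activity hops down the chain.
# Illustration only; not the real circuit.
n = 8
W = np.eye(n, k=-1)          # W[i, i-1] = 1: unit i-1 excites unit i
x = np.zeros(n)
x[0] = 1.0                   # kick off the sequence at unit 0

sequence = []
for step in range(n):
    sequence.append(int(np.argmax(x)))   # which unit is active at this step
    x = W @ x                            # activity hops to the next unit
```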

FULL PROJECT

Neural coding and dynamics for short-term memory

The brain processes sensory stimuli and uses this information in conjunction with goals and memories to generate actions. All these steps involve the representation of information and brain states in changing patterns of neural activity, or "neural codes".

We seek to understand which codes the brain uses to represent information; to characterize the advantages of those codes with respect to efficiency, ease of readout, and so on, compared to other potential coding strategies; to learn about the circuit architecture between neurons that allows the brain to possess such codes; and finally, to understand how the brain uses these codes to compute and generate actions well-suited for survival in the world. Our group develops numerical and analytical tools to help "crack the neural code" and model how it might be implemented in real neural circuits. The project proposed here ranges from topics as abstract as the mathematical analysis of neural codes to data-driven projects that analyze neural recordings to test models of how neural circuits work. On the abstract end, we are focused on questions of coding efficiency, such as how the maximum amount of information that can be stored over time in various neural codes depends on inherent variability in neural activity and the size of the circuit. On the data-driven end, we plan to collaborate with laboratories studying a set of brain areas involved in spatial learning and memory in rodents, the hippocampus and the entorhinal cortex. Our work on both coding and circuit mechanisms for this project will be focused on "grid cells", which represent animal location in space in an unusual way, and are hypothesized to be involved in navigational computations that allow mammals to estimate where they are even as they move about. We will make realistic models of the neural circuits that give rise to grid cells. We will model how grid cells interact with the hippocampus to build maps of the world and enable navigation within it.
Because the challenges of spatial inference are common to rodents and humans, and because the neural representations are similar, we expect our insights to shed light on the representations and mechanisms that underlie spatial memory and navigation in humans.
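One reason the periodic grid code counts as "unusual" is combinatorial: each grid module reports position only modulo its period, yet modules with different periods jointly identify position over a range far larger than any single period. A toy sketch with invented periods:

```python
# Toy grid-style code: two modules with (co-prime) periods 3 and 4, in
# arbitrary units, each report position modulo their period. Together they
# uniquely label every position over lcm(3, 4) = 12 units. Periods are
# invented for illustration.
periods = [3, 4]

def encode(position):
    """Phases reported by each grid module (position modulo its period)."""
    return tuple(position % p for p in periods)

codes = {encode(x) for x in range(12)}
unique_positions = len(codes)   # 12: every position 0..11 gets a distinct code
```

This residue-like structure is one hypothesized source of the code's large capacity relative to its number of neurons.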

FULL PROJECT

Neural computation of innate defensive behavioral decisions

In general, our actions are not mere reflexes in response to external stimuli. Most of the time, external stimuli interact with internal factors—such as our emotions, or memories—to influence how we act. These actions, in turn, affect both what new stimuli we receive and also change our internal brain state. Together, these factors make for the complexities of behavior. This interplay between external stimuli and internal states is the focus of research in our laboratory.

We will study this interplay in mice, using so-called “approach and avoidance” behaviors. An approach behavior might include interactions with other mice, such as mating or fighting. An avoidance behavior might include freezing or fleeing in the face of a predator. Both behaviors depend on external stimuli—attack behavior, for example, may be triggered by seeing another mouse—but also internal factors such as the animal’s state of arousal. We will focus on attack behaviors. To study those behaviors, we will monitor the activity of many neurons simultaneously in brain areas responsible for vision and in those responsible for the mouse’s emotional state. With modern tools, these measurements can be taken even while the mouse performs the attack behavior. From our observations, we will develop computational models of the mouse’s behavior. To test these models, we can use sophisticated genetic techniques to manipulate or eliminate the activity of neurons and observe the effect on the mouse’s behavior. Because similar behaviors in humans engage emotions—via a part of the brain called the limbic system—an understanding of the neural circuits under study here will be highly relevant to psychiatric disorders such as PTSD, phobias, autism, or schizophrenia.

FULL PROJECT

Neural dynamics for a cognitive map in the macaque brain

The ability of the human brain to form a complex memory of an event experienced just a single time is astounding—and that memory can be stored and retrieved for decades. We sometimes casually refer to memory retrieval as “visualizing” the past—often with precise spatial and temporal detail intact.

But how is such a mental map created in the brain? Research suggests that some of the brain regions responsible for forming and storing memory of past events, such as the hippocampus and the entorhinal cortex, are the same ones responsible for spatial navigation, implying a deep connection between the two processes. However, how these processes are related, especially in primates, is largely unknown. Building on work in rodents showing that brain regions involved in memory are activated when the animals experience a virtual reality setting, in our experiments we will extend these results to primates. We will train monkeys to use a joystick to navigate virtual environments. Simultaneously, we will use several electronic brain implants that can measure the electrical activity of over a hundred neurons in the monkey’s hippocampus and entorhinal cortex. By analyzing this large set of neural data with sophisticated mathematical techniques, we can determine how these neurons represent the virtual environment. For example, it is known, in rodents, that certain neurons increase their activity when the animal traverses a particular, absolute point in virtual space. That is, the neuron will become excited when the animal visits the corner of the room, but not the middle. Other neurons will be active when the animal visits the middle of the room, but not the corner. In primates, how these neurons operate is less clear, but this is straightforward to study in a virtual reality environment. By working out the details of spatial navigation in primate memory brain regions, our work will lay the foundation for understanding how these mechanisms underlie the formation of complex memories, not only in monkeys, but in humans as well.

FULL PROJECT

Neural encoding and decoding of policy uncertainty in the frontal cortex

Imagine an autumn day. When you leave home, you need to decide which type of jacket to wear. Looking at the sky or checking the weather forecast can help you take a guess at the weather, but this prediction is subject to uncertainty.

The uncertainty in your prediction translates into uncertainty in your decision of which jacket to wear. How does the brain deal with such uncertainty when making decisions? We plan to investigate this question on two fronts. First, we will ask how brain areas responsible for selecting actions encode uncertainty about the outcome of a decision. Second, we will ask how uncertainty is decoded to form estimates of the outcomes of possible decisions. To investigate the first question, we will record the electrical activity of neurons in a brain area called the premotor cortex, working in mice. The premotor cortex has been implicated in the selection of actions. Because decisions with uncertainty involve probabilities, we will focus on evidence for how groups of neurons encode outcomes probabilistically. To investigate the second question—how uncertainty is decoded in the brain—we will record from a brain area known as the orbitofrontal cortex, which is densely connected to the premotor cortex. We believe that the uncertainty-induced variability in neuronal responses will decrease as the mouse learns to improve its estimates. To help analyze the data we collect, we will team up with a theoretical neuroscience lab that specializes in probabilistic approaches to neural coding. Our investigations will lay the groundwork for future studies that incorporate uncertainty into mathematical analyses and models. Because estimating uncertainty is a basic fact of life, our results will pave the way to understanding higher cognitive processes that are based on unreliable data, such as making a decision based on a weather forecast.

FULL PROJECT

Population dynamics across pairs of cortical areas in learning and behavior

The activity of neurons in the brain represents external sensory stimuli, internal cognitive states, and plans for upcoming motor behavior. Neurons communicate with each other through electrical impulses known as spikes.

By measuring spikes from small groups of neurons, researchers have made tremendous progress toward building sophisticated, data-driven models of brain function. However, technical limitations have so far prevented this approach from scaling to hundreds, thousands, or millions of neurons acting together. We have proposed a new collaboration between experimental and theoretical laboratories to develop models of brain function based on data from hundreds of neurons recorded in multiple brain areas simultaneously. We will investigate how learning changes the neuronal representation of sensory and decision-related information in cortex. At the core of our work is a newly developed imaging technology that allows for recording the spiking activity of hundreds of neurons at once. We have also developed powerful new statistical tools to help us analyze this data.

Working in the visual system of mice, we will address several key questions: How is visual motion encoded in multiple brain areas during decision-making? How is ongoing activity related to how the brain encodes sensory and decision-related information? How does this activity change over time during learning? Our research will shed light on how the brain encodes sensory information during decision making, and on how that encoding changes as we learn.

FULL PROJECT

Probing recurrent dynamics in prefrontal cortex

We all know that our thoughts change from moment to moment. In other words, it is rare for our brain to dwell in the same state for more than a fraction of a second. Yet, because of technical limitations, much of cognitive neuroscience has arguably studied “snapshots” of brain activity.

As a consequence, we have accumulated a wealth of knowledge about what the responses of brain areas or even individual neurons represent—for example, some neurons seem to respond selectively to faces—but we mostly lack an understanding of how these neural responses relate and contribute to the dynamic computations performed by large networks of neurons. In this project we attempt to address this shortcoming. Exploiting recent technical advances to record the activity of large numbers of neurons at once, we will move beyond the “snapshot” view of the brain to observe how the brain dynamically changes from one state to the next.

Working in monkeys, we will characterize the dynamic responses of the prefrontal cortex, a brain area largely unique to primates that underlies our ability to flexibly choose among possible actions under ever-changing circumstances. The monkeys will be trained to associate a sensory input—such as a visual stimulus—with a motor output—such as a rapid eye movement to a target. The monkey will have to make different associations based on the context of the situation, the rules of which the monkey will have to learn. Our goal is to develop a model of the neural networks involved in so-called context-dependent computations, and compare this model’s predictions to the experimentally obtained data. We can also take advantage of sophisticated genetic techniques to perturb the activity of neurons in the prefrontal cortex, and compare the effects of those perturbations on the monkey’s actions to the effects of simulated perturbations on the computational model. This close marriage of theory and experiments will provide a deeper understanding of the function of the prefrontal cortex, establishing a conceptual framework to explain how cognition and behavior emerge from the dynamic behavior of neural networks.

FULL PROJECT

Relating dynamic cognitive variables to neural population activity

Millions of neurons work together to produce perceptions, thoughts, and actions, but exactly how this happens remains a mystery. Recent advances in technology that can record the activity of many neurons at once have led to breakthroughs in understanding how populations of neurons behave. However, relating so-called population behavior to cognitive processes such as decision-making has proved a challenge.

We have made significant progress toward bridging this gap by developing a new method for tracking an important aspect of decision-making. In our experimental approach, subjects are engaged in a two-alternative decision-making task. They are presented with randomly-timed pulses of evidence in favor of one alternative or another, then, at the end of the task, each subject must decide which of the two alternatives had the greater number of pulses. We found that both rodent and human subjects used a decision-making strategy called “accumulation of evidence,” which tracks how sensory evidence accumulates for or against different choices during the formation of a decision. Having identified the decision-making strategy, we next plan to incorporate simultaneous electrode recordings of many neurons to relate population activity to the accumulation of evidence in decisions. Because such an accumulation of evidence strategy is thought to be a core component of perceptual, social, and even economic decisions, our method provides a unique opportunity to relate neural activity to a wide variety of cognitive processes.
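A single trial of the accumulation-of-evidence strategy can be sketched as a running sum of pulses; the pulse count and probabilities below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy accumulation-of-evidence trial: randomly timed pulses favor left (-1)
# or right (+1); the choice is the sign of the final total. Pulse number
# and probabilities are illustrative.
pulses = rng.choice([-1, +1], size=20, p=[0.4, 0.6])  # a right-favoring trial
accumulator = np.cumsum(pulses)                       # evidence trace over the trial
choice = "right" if accumulator[-1] > 0 else "left"
```

The intermediate trace `accumulator` is the quantity the project aims to relate to simultaneously recorded population activity.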

FULL PROJECT

Searching for universality in the state space of neural networks

So-called “collective behavior” surrounds us. For example, the solidity of an object is the result of forces that act between neighboring atoms, on a scale nearly one billion times smaller than the object itself. Decades of work in theoretical physics have provided precise mathematical theories that explain how macroscopic phenomena such as solidity emerge from the microscopic interactions among atoms—i.e., from the collective behavior of those atoms.

Just as matter is composed of individual atomic units, the brain is composed of individual cellular units, termed neurons. Is it possible that perceptions, thoughts, memories, and actions can be described by the collective behavior of neurons? Could mathematical ideas from physics be applied to the brain’s biology? We are taking advantage of new experimental technology for simultaneously monitoring the activity of many neurons to test these very ideas. Starting with the raw data from neurons, we will borrow strategies from physics—such as the construction of thermodynamic variables from the statistical mechanics of atoms—to build descriptions of neural networks. Preliminary analysis from both the retina and a brain region called the hippocampus suggest that such constructions are feasible. This sets the stage for discovering universal principles governing the collective behavior of neurons, much as work in physics has described universal principles governing inanimate systems.
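One physics-inspired construction of this kind is a maximum-entropy (Ising-like) model, in which each binary activity pattern is assigned an energy and a Boltzmann probability. The fields and couplings below are arbitrary illustrations, not values fitted to retinal or hippocampal data:

```python
import itertools
import numpy as np

# Toy Ising-style description of a 3-neuron population: every binary
# pattern s gets an "energy" E(s), and pattern probabilities follow a
# Boltzmann distribution p(s) ~ exp(-E(s)). Parameters are arbitrary.
n = 3
h = np.array([0.2, -0.1, 0.3])          # per-neuron biases ("fields")
J = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, -0.4],
              [0.0, -0.4, 0.0]])        # symmetric pairwise couplings

def energy(s):
    s = np.asarray(s, dtype=float)
    return -(h @ s) - 0.5 * s @ J @ s

patterns = list(itertools.product([0, 1], repeat=n))
weights = np.array([np.exp(-energy(s)) for s in patterns])
probs = weights / weights.sum()         # normalized Boltzmann distribution
```

From such a distribution one can then build thermodynamic-style summaries (entropy, specific heat) of the population, which is the flavor of construction the project describes.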

FULL PROJECT

Single-trial visual familiarity memory acquisition

We've all experienced a sense of familiarity—that vague sense that we’ve met someone, been in a room, or seen a movie before—even if we can’t quite remember the specifics. Recent experimental work has shown that we can pick familiar images out of tens of thousands, even after seeing an image only once, and for just a few seconds.

This remarkable ability to remember whether we have previously encountered specific people, objects or scenes is called “visual familiarity memory.” Impairments in visual familiarity memory, including those that accompany disorders such as dementia, lead to devastating inabilities to recognize one’s own family or home. Unfortunately, we currently have only a rudimentary understanding of the processing in the brain responsible for this type of memory. One reason for this is that previous studies were only able to look at one or just a few neurons at a time, and then averaged their activity in response to repeated presentations of a stimulus like a visual scene. However, visual familiarity memory can, by definition, be formed with just a single exposure, so averaging across repeated trials is not the best way to study it. New developments in measuring the activity of neurons allow us, for the first time, to observe many neurons at once as these memories are formed and retrieved.

We will examine multiple brain areas, such as the inferotemporal cortex, which is known to be involved in object recognition, as well as the brain region that provides its input, visual area V4, which processes more basic visual features. By studying the interactions between these two brain regions, we will determine exactly how familiarity is established. By elucidating these mechanisms, we can better understand some of the fundamental properties of our remarkable memories, and use these insights to unravel how memories break down in disease.

FULL PROJECT

Spatiotemporal structure of neural population dynamics in the motor system

Consider the simple act of reaching for a cup of coffee. The brain must decide to pick up the coffee, prepare for movement, and then execute the movement. Each stage of the process requires that neurons in the brain exhibit different activity.

Experiments have revealed that a part of the brain involved in movement, the motor cortex, is active both during the planning phase of movement and during the execution of the movement itself. But how can the same brain area—the same set of neurons—be responsible for two distinct phases? To answer this question, we must understand the patterns of activity exhibited by a set of neurons across time, a phenomenon termed “internal dynamics.” When we switch from planning to executing a movement, the internal dynamics of the same set of neurons change. This allows for the same neural network to control two different types of processes.

Working in the motor cortex, we will use recording technology that allows us to observe the activity of many neurons simultaneously. This way, we hope to address how the motor cortex changes its internal dynamics from the planning to the movement phase. Our recent results have allowed us to characterize mathematically such changes in internal dynamics. This mathematical characterization allows us to identify and study key transitions between brain states. Our research seeks to produce a better understanding of how the brain produces movement and how it transitions from one state to another more generally.
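The switch in internal dynamics can be caricatured as one population obeying different linear dynamics in the two phases: contracting toward a prepared state during planning, then rotating during movement. The matrices below are illustrative, not fitted to motor cortex data:

```python
import numpy as np

# Toy internal-dynamics switch: the same 2-neuron state obeys a decaying
# (contracting) rule during planning and a rotational rule during movement.
# Matrices and the initial state are invented for illustration.
A_plan = np.array([[0.9, 0.0],
                   [0.0, 0.9]])                        # contracting: settle into a plan
theta = 0.3
A_move = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])   # rotational: drive the movement

x = np.array([1.0, 0.5])
for _ in range(10):            # planning phase
    x = A_plan @ x
plan_state = x.copy()
for _ in range(10):            # movement phase: same neurons, different dynamics
    x = A_move @ x

# Rotation preserves the length of the state vector; planning shrank it.
norm_preserved = np.isclose(np.linalg.norm(x), np.linalg.norm(plan_state))
```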

FULL PROJECT

The latent dynamical structure of mental activity

The human brain is composed of billions of neurons. Each neuron is capable of complicated computational feats, yet the brain as a whole is able to carry out vastly more complex functions than could any individual neuron alone.

This remarkable computational power of the brain hinges on the fine coordination among neurons. However, though we understand a great deal about how an individual neuron functions, we have little knowledge as to how the collective activity of neurons in circuits gives rise to computations in the brain. Much of what we have learned about the brain has come from experiments that record the electrical activity of one neuron at a time, then look at the average profile of the response of that neuron to a stimulus or a task. However, in the brain, there is no “average” response. Rather, the collective activity of populations of neurons reflects the unique response of the brain each time a stimulus is experienced or a task performed. Now, new technical advances have allowed us to monitor the activity of hundreds or thousands of neurons simultaneously. We will take advantage of this technology to develop algorithms and data-analytic tools to discover how neurons are coordinated in the brain, and how that coordination affects the brain’s computations. These tools will be developed and applied in close collaboration with experimental laboratories, studying a range of brain functions spanning the arc from perception to cognition and action. By identifying how the concerted action of neurons gives rise to these processes (instead of recording one neuron at a time), we will be able to see—as the brain would—how those computations unfold moment by moment.
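One simple way such coordination can be discovered is with dimensionality reduction: if many neurons share a latent signal, most population variance concentrates in a few dimensions. A toy principal-components sketch on synthetic data (not the project's actual algorithms):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy latent-structure demo: 50 neurons driven by one shared latent signal
# plus private noise. An eigendecomposition of the population covariance
# recovers the single dominant shared dimension. All numbers are invented.
T, N = 500, 50
latent = np.sin(np.linspace(0, 8 * np.pi, T))          # shared latent signal
loadings = rng.normal(size=N)                          # each neuron's coupling
activity = np.outer(latent, loadings) + 0.1 * rng.normal(size=(T, N))

centered = activity - activity.mean(axis=0)
cov = centered.T @ centered / T                        # population covariance
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
top_share = eigvals[0] / eigvals.sum()                 # variance in the top dimension
```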

FULL PROJECT

The neural basis of Bayesian sensorimotor integration

All nervous systems must operate under conditions of uncertainty. It is thought that one way the brain deals with this fact is to create prior expectations of the world based on experience, and integrate those prior expectations with incoming sensory input to decide and take an action. Combining prior expectations with incoming information is known in the field as Bayesian integration.

How neural circuits perform such Bayesian calculations, however, is unknown. Working in monkeys, we will combine a complex behavioral task, state-of-the-art technology for recording from multiple neurons simultaneously, and mathematical tools to uncover the principles by which neural circuits perform Bayesian integration. The task is as follows. The monkeys will observe two flashes of light and will then be asked to reproduce the interval between flashes with a tap of the finger or a quick movement of the eye. The Bayesian calculation here is to integrate the memory of the interval (the prior expectation) with sensory information (the elapsed time between flashes). We will then use electrodes to record from brain areas thought to be involved in this task and analyze the activity of those neurons using sophisticated mathematical tools. With this approach, we can make specific predictions about how the neurons should behave based on our mathematical models, and test those predictions with experiments. More generally, our results will shed light on how animals integrate internal and external information to generate flexible, goal-directed actions.
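Under Gaussian assumptions, the Bayesian integration described here has a closed form: the estimate is a precision-weighted average of the prior mean and the measurement, pulled toward the prior. The numbers below are illustrative:

```python
# Toy Bayesian interval estimate: combine a Gaussian prior over intervals
# (learned from past trials) with a noisy measurement of the current
# interval. All values are invented for illustration.
prior_mean, prior_sd = 0.8, 0.1        # seconds: distribution of past intervals
measured, meas_sd = 1.0, 0.2           # this trial's noisy measurement

# Gaussian prior x Gaussian likelihood gives a Gaussian posterior whose
# mean is a precision-weighted average of prior mean and measurement.
w = (1 / prior_sd**2) / (1 / prior_sd**2 + 1 / meas_sd**2)
estimate = w * prior_mean + (1 - w) * measured   # shrinks toward the prior mean
```

The characteristic behavioral signature of this computation is that reproduced intervals are biased toward the mean of previously experienced intervals.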

FULL PROJECT

The neural substrates of memories and decisions

The abilities to learn, remember, plan, and decide are central to who we are. These abilities require that memories be stored, retrieved, and then used to guide decision-making. Current theories suggest that memories are stored when specific patterns of neural activity cause changes in the connections among neurons, and memories are retrieved when these patterns are reinstated.

While we have some idea of the brain regions required for these processes, we lack a deep understanding of how memories are created, remembered, and used. Our team and others have found a phenomenon—termed sharp-wave ripple events—that contributes to memory retrieval and memory-guided planning. Studying this phenomenon in rats, we have already developed advanced decoding algorithms that can determine the content—i.e., the memory being retrieved—from different patterns of neural activity. But to truly understand memory retrieval, we need to move beyond simple observation to the manipulation of neural circuits. To do this, we will combine new real-time decoding algorithms with technology for monitoring and manipulating the activity of many neurons at once. Because we can decode which memory is being retrieved, we will be able to selectively disrupt retrieval of that memory while leaving other brain processes unaffected. These experiments will make it possible to understand how the disruption of specific memory events affects the processing of those events in brain regions and thereby alters the animal’s behavior.
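Decoding "which memory is being retrieved" from spatial activity is often framed as Poisson population decoding. The sketch below uses invented place-cell tuning curves and spike counts, not the team's real-time algorithm:

```python
import numpy as np

# Toy Poisson decoder: given place-cell tuning curves, infer the most
# likely track position from one bin of spike counts (uniform prior).
# Tuning curves, counts, and bin width are invented for illustration.
positions = np.linspace(0, 100, 101)                 # candidate positions (cm)
centers = np.array([20.0, 50.0, 80.0])               # three place-cell centers
tuning = 10.0 * np.exp(-(positions[:, None] - centers) ** 2 / (2 * 5.0**2)) + 0.1

spikes = np.array([0, 6, 1])                         # counts in one time bin
dt = 0.25                                            # bin width (s)
log_like = (spikes * np.log(tuning * dt) - tuning * dt).sum(axis=1)
decoded = positions[np.argmax(log_like)]             # maximum-likelihood position
```

Run continuously, this kind of decoder yields the moment-by-moment "content" estimate that a closed-loop disruption experiment could act on.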

FULL PROJECT

Toward computational neuroscience of global brains

Neural pathways are composed of hundreds, thousands, or millions of neurons. Yet, much of our understanding of the brain comes from recordings of the electrical activity of one or a few neurons at a time. A deeper understanding requires experimental and theoretical investigations of entire neural pathways, or, ideally, entire brains.

We propose to take advantage of recent technology that allows recording of brain-wide neuronal activity in larval zebra fish. We will analyze the activity of entire neuronal pathways distributed across the fish brain while the fish orients itself within complex visual scenes, and also monitor the ongoing activity of neurons when the fish is at rest. An important part of this project is to develop methods for modeling brain function at multiple spatial and temporal scales. That is, we will construct theoretical and computational models that can account for the neural activity of small or large groups of neurons as they change rapidly or slowly over time. Through collaboration among multiple labs we aim to accomplish three goals. First, we will collect and synthesize anatomical and physiological data of single neurons and the connections among them. Second, we will build, analyze, and simulate neural circuits. Third, we will simulate new visual stimuli and behaviors to help design new experiments. Our lab will focus on developing models and analytical methods, the lab of Florian Engert of Harvard University will design the experiments and collect the data, and the lab of Daniel Lee of the University of Pennsylvania will apply sophisticated techniques from computer science to simulate brain circuits and zebra fish behavior. Our research will provide new ideas and tools that will not only be applicable to zebra fish, but to large scale recordings of neuronal activity in the brains of other organisms.

Towards a theory of multi-neuronal dimensionality, dynamics and measurement

The brain is composed of billions of neurons, yet, until recently, technology only allowed for recording the electrical activity of a single neuron at a time. Now, with President Obama’s BRAIN Initiative, significant resources will be channeled into developing technology to record from thousands or millions of neurons at once. However, for these data to be useful, we must also develop the mathematical tools to analyze and interpret such large and complex data sets.

Our group is positioned at the vanguard of such efforts to develop theories that guide biological interpretation of complex data sets. In addition to developing general theories uniting dynamical systems and high dimensional statistics, our group will collaborate with experimental laboratories to test these theories in monkeys as they plan and execute complex reaches in the face of external forces that interfere with their actions. Our aim is to develop a “Rosetta Stone” between biology and mathematics that will enable us to interpret large data sets from multi-neuronal recordings not only when monkeys reach, but for any intricate behavior emerging from the collective activity of millions of neurons.
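One simple quantitative handle on the "dimensionality" of multi-neuronal data is the participation ratio of the covariance eigenvalues. The sketch below, in Python with NumPy, is an illustration rather than the group's actual theory: it recovers the low effective dimensionality of simulated activity in which 50 neurons are driven by only three shared latent signals.

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of population activity X
    (timepoints x neurons), defined from the covariance eigenvalues
    as (sum of eigenvalues)^2 / (sum of squared eigenvalues)."""
    Xc = X - X.mean(axis=0)
    eig = np.linalg.eigvalsh(np.cov(Xc.T))
    return eig.sum() ** 2 / np.sum(eig ** 2)

# 50 "neurons" whose activity mixes only 3 shared latent signals
rng = np.random.default_rng(1)
latents = rng.standard_normal((1000, 3))
mixing = rng.standard_normal((3, 50))
X = latents @ mixing + 0.05 * rng.standard_normal((1000, 50))
print(round(participation_ratio(X), 1))  # near 3, far below 50
```

Measures like this let one ask whether the collective activity of millions of neurons actually explores a much smaller space during behavior.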

Understanding neural computations across the global brain

The brain is composed of networks of thousands or millions of neurons, and modern neuroscience is just reaching the stage where researchers can observe most, if not all, of these neurons in living animals. Yet, many of the most fundamental questions in neuroscience still remain unanswered. How does the brain represent sensory information? How do responses to sensory information change when paired with good or bad outcomes? How do the brain’s sensory and motor systems change during learning?

One obstacle to answering these questions is that measurements of the activity of every neuron in the brain generate massive, complex datasets that researchers are only now learning to interpret. We have positioned our team at the forefront of these efforts to capture and interpret such complicated neural data. We are developing an open-source computing library for interpreting the large-scale data produced by whole-brain recordings in larval zebrafish. These computing strategies will incorporate cutting-edge technology from disparate fields such as machine learning, data mining, and distributed computing. In addition, this combination of experimental data and computational analysis provides a platform for fruitful collaboration between experimentalists and theorists. Our work will be particularly attractive to theoretical neuroscientists because brain simulations can be directly compared to the data: Since the recordings are from the entire brain, there is no need to make assumptions about the activity of unrecorded neurons. Such a partnership promises not just to bridge the gap between theory and experiments, but also to provide fundamental insights into the brain’s biology.
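The distributed-computing ingredient can be illustrated with a toy map-reduce pattern. This is plain Python with NumPy, not the library's actual API: per-chunk summaries are computed independently and then combined, so a statistic over a recording too large to fit in memory is still computed exactly.

```python
import numpy as np

def chunked_mean(chunks):
    """Map-reduce style mean over chunks of a large recording.
    Each chunk is a (timepoints x neurons) array; only running sums
    are kept, so the full dataset never needs to fit in memory."""
    total, count = 0.0, 0
    for chunk in chunks:
        total = total + chunk.sum(axis=0)  # map: per-chunk sum
        count += chunk.shape[0]
    return total / count                   # reduce: combine sums

# A simulated recording of 5 neurons, streamed in four pieces
rng = np.random.default_rng(2)
data = rng.standard_normal((400, 5)) + np.arange(5)
stream = np.array_split(data, 4)
print(np.round(chunked_mean(stream), 2))
```

Because the per-chunk step is independent, the same pattern parallelizes across the machines of a cluster.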

Understanding visual object coding in the macaque brain

When we perceive a face, we see multiple aspects, such as its location in space, its overall 3-D shape, and the color of the lips and eyes. We know that these different attributes are represented in distinct parts of the brain. But how are they bound together to form a coherent whole? That is, how do we create a visual object out of a barrage of sensory stimuli?

This so-called “binding” problem is one of the central challenges to understanding how large groups of neurons work together to encode information. Studying object perception is also a way to study another central challenge of neuroscience: how internal states of the brain give rise to perceptions. For example, when one sees a face and then a vase in the famous face-vase illusion, nothing is changed in the sensory stimulus; yet, what we perceive changes. This change must be the result of changes in the state of the brain that are independent of sensory input. We will train monkeys to recognize a specific object in an otherwise cluttered visual scene. By using state-of-the-art techniques for recording and manipulating the activity of neurons, we can study visual object formation at the level of neural circuits. With this experimental setup, our data will provide insight not only into object formation, but also into the fundamental problem of binding. Given the similarity between monkey and human brains, we expect these insights to apply to humans as well.

Whole brain calcium imaging in freely behaving nematodes

How does the collective activity of individual neurons in the brain generate actions? This most fundamental question in neuroscience remains unanswered in part because we lack methods to record from every neuron simultaneously in an animal that is allowed to move about freely. We propose to start small by developing a new instrument that will allow us to record from every neuron in a microscopic worm’s brain as it freely moves about.

This worm, known as C. elegans, has only 125 neurons in its head, compared with the millions or billions of neurons in the brains of rodents and humans. To record from the worm’s brain, its neurons will be genetically engineered so that they emit flashes of light every time they are active. We are developing a new kind of instrument—a sophisticated microscope—that can both record the activity of every neuron and track the animal’s movements in real time. Developing this microscope will require a multidisciplinary effort that draws upon techniques from physics, electrical engineering, computer science, and molecular biology. Our technology will lead to the first brain-wide models of how neural activity leads to actions. We are starting small, but the work here in C. elegans will serve as a launching pad for further investigations into the mammalian brain, and, one day, this tiny worm may even shed light on how the human brain works as well.
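The "flashes of light" are fluorescence changes of a calcium indicator, conventionally summarized as a ΔF/F trace. Below is a minimal sketch of that computation in Python with NumPy; the percentile baseline is a common illustrative choice, not the lab's actual pipeline.

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=10):
    """Convert a raw fluorescence trace into dF/F, the standard
    readout of calcium activity: (F - F0) / F0, with the baseline F0
    taken as a low percentile of the trace."""
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

# A toy trace: baseline fluorescence near 100 with one transient
trace = np.array([100.0, 101.0, 100.0, 150.0, 120.0, 100.0])
print(np.round(delta_f_over_f(trace), 2))
```

Applied to every neuron in parallel, traces like this become the input to the brain-wide activity-to-behavior models described above.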
