Brain’s Hidden Depths Explored in SCGB Meeting

New insights into how the brain makes decisions, recognizes novel objects and controls attention were among many areas discussed at the second annual gathering of the Simons Collaboration on the Global Brain

[Image: Dora Angelaki, SCGB executive committee]

What motivates us to climb a mountain? How do we decide whether to buy a compact car or an SUV? What role do our memories play in such choices? These inner workings of the brain have traditionally been difficult to study. But new technologies are finally allowing us to advance on this long-inaccessible frontier. The Simons Collaboration on the Global Brain (SCGB), a grant program launched in 2014, seeks to discover how neural activity gives rise to the most mysterious aspects of cognition by pairing new technologies with powerful computational and modeling techniques.

In September 2016, SCGB investigators and fellows convened in New York City to share their findings at a two-day conference. Just two years into the collaboration’s work, they have already made significant progress. We outline here a selection of the more than 30 presentations at the meeting, exploring topics ranging from decision making to memory to movement.

Decision making

Making a decision, be it crossing the street or eating a second piece of cheesecake, requires integrating different types of sensory information. Is that car far enough down the road for me to make it safely across the street? How sure am I about that estimate? SCGB researchers are developing new approaches to determining how factors such as confidence and context influence how the brain makes decisions.

Neuroscientists typically study decision making by recording neural activity as an animal performs the same discrimination task over and over again. They then average the results across different trials, smoothing away the noise inherent in neural activity. Although that approach has been informative, it wipes out some important information. We sometimes waver or change our minds in the midst of a decision. What happens in the brain during that process? “If you average over a bunch of trials, you lose some of the interesting dynamics of how a decision is reached,” says Bill Newsome, an experimentalist at Stanford University.

However, the noisy nature of neurons makes it challenging to decode meaningful neural activity produced during a single trial. Changes in neural activity could reflect simple noise or an important transition in the animal’s state of mind. Simultaneously recording from many neurons can help scientists interpret these changes. If lots of cells alter their firing patterns at the same time, “it indicates some underlying change in the state of the system,” Newsome says.

[Image: William Newsome, Maneesh Sahani and Krishna Shenoy, SCGB investigators]

Newsome, Krishna Shenoy and their collaborators have developed an algorithm, or decoder, capable of detecting these changes of state during single trials. The researchers record the neural activity of monkeys as they choose whether a collection of flashing dots is moving predominantly left or right. The decoder algorithm analyzes neural activity to produce a ‘decision variable’ that predicts the monkey’s choice. A large positive value might indicate that the monkey plans to select leftward motion, while a large negative value might indicate rightward motion.

Most previous studies using single-trial decoders store the data and analyze it after the fact, predicting the monkey’s decision long after the experiment is over. But the Newsome team’s algorithm works in real time. “The really novel thing we did was combine studies of decision making with fast algorithms developed in the Shenoy lab over the last 10 years for real-time readout of the neural state,” Newsome says. “This has not been done before — to read out the decision variable in real time and test how well you can predict the monkey’s choice based on an instantaneous measure.” The algorithm they used isn’t particularly complicated, Newsome says. “It’s a simple logistic regression, the most vanilla decoding algorithm you can apply.”
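For readers who want a concrete picture, the sketch below shows in Python how a logistic-regression decoder of this general kind turns a vector of spike counts into a signed decision variable. It is a toy illustration with simulated data, not the Newsome and Shenoy teams’ actual pipeline; the data-generation scheme and every parameter are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in data: binned spike counts (trials x neurons) and the
# monkey's choice on each trial. Each neuron gets a random choice preference.
rng = np.random.default_rng(0)
n_trials, n_neurons = 500, 96
choice = rng.integers(0, 2, size=n_trials)        # 0 = rightward, 1 = leftward
tuning = rng.normal(0, 1, size=n_neurons)         # per-neuron choice preference
rates = 5 + tuning * (2 * choice[:, None] - 1)    # mean rate shifts with choice
counts = rng.poisson(np.clip(rates, 0.1, None))   # Poisson spike counts

# Fit the 'vanilla' decoder on the first 400 trials.
decoder = LogisticRegression(max_iter=1000).fit(counts[:400], choice[:400])

# The decision variable is the signed distance from the decision boundary:
# its sign predicts the choice, its magnitude reflects the decoder's confidence.
dv = decoder.decision_function(counts[400:])
accuracy = np.mean((dv > 0).astype(int) == choice[400:])
print("held-out accuracy:", accuracy)
```

In the real experiment, the same learned weights would be applied to streaming spike counts within a trial, so the decision variable can be tracked, and acted on, moment by moment.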

This real-time approach grants access to an internal state of mind that has traditionally been difficult to study: what’s happening in the brain during active decision making. The researchers found that the decision variable fluctuated during each trial as the monkey thought about whether the dots were moving left or right. To determine whether these fluctuations were meaningful, the researchers abruptly stopped the trial when the decision variable hit a certain threshold value, forcing the monkey to make a decision. They found that the algorithm correctly predicted the monkey’s choice most of the time, even when the decision variable was relatively small. This suggests that the fluctuations on single trials are indeed meaningful indicators of the animal’s internal state rather than simple measurement noise. Moreover, larger fluctuations predicted the instantaneous decision more accurately.

The researchers also analyzed trials in which the monkey appeared to change its mind — when the decision variable flipped from positive to negative or vice versa — using the same threshold values of the decision variable to stop the trial following a putative change of mind. The algorithm performed a bit worse on these trials, suggesting that the monkey used additional information to make decisions in these cases. The researchers speculate that the additional information may reside in the speed with which the decision variable reaches the threshold value following a change of mind. They are currently testing this possibility.

Neural representations of internal states

How does the brain hold information, such as a phone number, in short-term memory? One theory is that a group of neurons representing the memory fires consistently for as long as that memory is in place. But so far, that idea does not match experimental observations. Instead, “activity fluctuates all over the place,” says Shaul Druckmann, a neuroscientist from the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia.

[Image: Shaul Druckmann, SCGB investigator]

Druckmann proposes that this fluctuating neural activity actually represents a combination of multiple signals. “Maybe there is a signal that’s constant in time, but we don’t see it because we are looking at a summation,” he says. For example, a group of five neurons could produce one pattern of activity that’s relevant for a short-term-memory task and another that isn’t relevant for that particular task. (He refers to these as coding and noncoding patterns, respectively.) The sum of these activity patterns would fluctuate.

Druckmann is careful to note that the coding versus noncoding terminology refers to patterns of activity within the same set of neurons rather than from two different groupings of neurons. “A single ensemble can have many patterns of activity at the same time, and we have to tease them apart,” he says. “A few meaningful patterns is not the same as a few selective neurons.”
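A toy numerical example may make this concrete. In the sketch below (all patterns invented), a five-neuron ensemble carries a constant ‘coding’ pattern summed with a fluctuating, orthogonal ‘noncoding’ pattern. Every individual neuron’s trace fluctuates, yet the projection of the population activity onto the coding pattern is steady.

```python
import numpy as np

# Invented example: a 5-neuron ensemble whose activity is the sum of a
# constant 'coding' pattern and a fluctuating 'noncoding' pattern.
rng = np.random.default_rng(1)
T = 200                                             # time bins in the delay period
coding = np.array([1.0, -0.5, 0.8, -1.0, 0.3])      # pattern carrying the memory
noncoding = np.array([0.5, 1.0, -0.7, 0.2, -0.9])   # task-irrelevant pattern
noncoding -= coding * (coding @ noncoding) / (coding @ coding)  # make it orthogonal

activity = (2.0 * np.outer(np.ones(T), coding)            # constant coding signal
            + np.outer(rng.normal(0, 1, T), noncoding))   # fluctuating noncoding signal

# Every neuron's trace fluctuates, but the projection onto the coding
# pattern is constant: the memory signal was there all along.
projection = activity @ coding / (coding @ coding)
print("per-neuron std:", activity.std(axis=0).round(2))
print("coding-projection std:", projection.std().round(6))
```

Recovering such patterns from real recordings is harder, of course, because the coding direction is not known in advance and must be estimated from the data.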

In a paper published in Nature earlier this year in collaboration with the Svoboda lab, also at Janelia, Druckmann and his collaborators identified coding and noncoding patterns of neural activity as rodents performed a short-term-memory task. They found that coding patterns are particularly robust. The researchers briefly silenced groups of neurons in the premotor cortex using optogenetics. Coding activity quickly rebounded from the temporary freeze, and rodents could still perform the memory task. “No one thought the brain had the ability to recover and make up for lost time,” Druckmann says.

Researchers think this recovery is possible because the brain has multiple copies of the relevant coding patterns — in this case, the patterns are contained in both hemispheres of the premotor cortex. When scientists inactivated neurons in both hemispheres, coding patterns failed to bounce back.

Not all neural activity is resilient, however. The rodents’ brains restored only the coding pattern — the segment of neural activity relevant for the task at hand — not the noncoding patterns. Just as engineers build backup systems for the critical parts of a machine but not for the dispensable parts, the brain seems to ensure that the essential components of neural activity are resilient to damage. “It means that the concept of taking activity and decomposing it into important and non-important parts is not just something we as theoreticians like to do,” Druckmann says. “The brain also respects this principle — it doesn’t bother to correct the parts that aren’t important.”

[Image: David Tank, SCGB director]

The Druckmann and Svoboda labs are now developing optogenetics technology capable of silencing neurons more precisely. In future experiments, they hope to specifically target coding or noncoding patterns and determine whether changing those patterns alters behavior. “We want to push the patterns around a bit and see how they rearrange themselves,” Druckmann says.

Nicole Rust, a neuroscientist at the University of Pennsylvania, is trying to understand a different internal state — how the brain represents novelty and familiarity. Humans and other primates have a remarkable ability to recognize something they’ve seen just once before — even when they have to distinguish it from very similar scenes, by remembering that a loaf of bread was in front of a bread box rather than behind it, for instance. “That suggests that memories have rich information,” Rust says. “Where and how are these memories stored?”

To try to answer that question, Rust and her collaborators recorded neural activity from the inferotemporal (IT) cortex — a brain area essential for object recognition — as animals looked at a series of novel and familiar images. The researchers found that they could predict based on IT activity whether the animal was looking at an image it had seen before, even if that image had appeared several minutes before and was followed by dozens of images.

In general, the researchers found that familiar images triggered a lower neural firing rate than novel ones. Some computational models suggest that familiarity is encoded in the total number of spikes, a hypothesis that Rust and her collaborators plan to explore. If that’s the case, “the difference between novel and familiar images is reflected by the total number of spikes coming out of IT,” Rust says. The identity of an image, in contrast, is likely reflected by the pattern of firing rates across different neurons.
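This hypothesized two-channel code is easy to caricature in a few lines. In the invented example below, familiarity scales down every neuron’s firing rate by a common gain, so the total spike count drops, while the relative pattern across neurons, which carries image identity, is preserved.

```python
import numpy as np

# Toy model of the proposed IT code (all numbers invented): identity lives in
# the pattern of rates across neurons, familiarity in the total spike count.
rng = np.random.default_rng(2)
n_neurons = 50
identity_pattern = {im: rng.lognormal(1.0, 0.5, n_neurons) for im in "ABC"}

def it_response(image, familiar):
    gain = 0.7 if familiar else 1.0   # familiarity lowers the overall rate
    return rng.poisson(gain * identity_pattern[image])

r_novel = it_response("A", familiar=False)
r_familiar = it_response("A", familiar=True)

print("total spikes, novel vs. familiar:", r_novel.sum(), r_familiar.sum())
# The across-neuron pattern survives the gain change, so identity is preserved.
print("pattern correlation:", round(np.corrcoef(r_novel, r_familiar)[0, 1], 2))
```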

Rust’s team also analyzed how IT responds to images with high or low contrast. While decreasing contrast and increasing familiarity both tended to decrease the overall firing rate, they did so in distinct ways — the familiarity signal arrived in IT later than the contrast signal. One hypothesis for how familiarity signals are formed in IT proposes that these memories are stored within IT via changes in feed-forward synaptic weights. “That model predicts that contrast and familiarity signals should arrive at the same time in IT, but they do not, so we can rule out this simple model,” Rust says. The researchers are now developing and evaluating alternative models.

Neural mechanisms underlying movement and behavior

When you reach for a cup of coffee, your brain both conceives of the movement and triggers a physical action. Disentangling these two activities in the brain is a challenge. Larry Abbott, a theoretician at Columbia University, and his collaborators are using brain-machine interfaces (BMIs) to decouple the representations of mental and physical reach. Abbott’s team used data from experiments in which monkeys were trained to move a dot toward a corner of a cube in a three-dimensional virtual environment. Initially, the animals controlled the dot’s position with their arm. They then learned to move the dot using only neural activity recorded from the motor cortex via a BMI. The arm sat still during the BMI-controlled version of the task, eliminating neural activity linked to muscle movement. “That allowed us to find neural activity related to the more abstract notion of what you’re trying to do,” Abbott says.

Comparing neural population data from both tasks, the researchers found a pattern of neural activity that was the same whether the animal pointed to the target using its arm or its mind. In both cases, neural activity included a geometric representation of the cube from the virtual environment. “The surprise was that there is a more abstract representation of the movement that is present and the same whether or not the arm is moving,” Abbott says. “That suggests that neurons preserve the rough structure of physical space.” Abbott’s team next hopes to map this picture of neural activity onto real neural circuits.

[Image: Michale Fee, SCGB investigator]

Michale Fee, a neuroscientist at the Massachusetts Institute of Technology, described how brain activity in zebra finches changes as young animals learn their song. Each song lasts about a second and is made up of several syllables. The songs are represented by a sequence of neuronal bursts in a brain area called HVC. Fee and his collaborators wanted to learn how these sequences are set up during song learning.

Young birds initially learn to produce nonsense syllables, which resemble a baby’s babbling and are known as subsong. Subsong gradually becomes more structured, evolving into consistent, 100-millisecond-long sounds known as protosyllables. The birds build their vocabulary by generating new variations of the first protosyllable, eventually creating a full-fledged song.

In research published in Nature last year, Fee, along with then-graduate student Tatsuo Okubo and their collaborators, showed that early in the learning process, chains of neurons in HVC produce rhythmic bursts of activity at the start of each protosyllable. As birds learn their song, spike-timing-dependent plasticity breaks the chain into smaller subchains, which produce syllables in the adult bird. “The sequence that produces the protosyllable kind of splits in half,” Fee says. “It divides in the same way that the DNA of a cell splits to produce two copies during division.” Fee’s team used their neural data to develop a computational model for the splitting process.

Whereas HVC produces sequential neuronal activity, the brain area known as LMAN generates random dynamics. LMAN drives the early babbling of subsong by triggering random activity in downstream circuitry, such as the premotor area known as RA and muscles of the vocal organ. Fee and his collaborators are using multineuron recordings to better understand LMAN’s role in song learning. “We think LMAN generates random patterns of activity that initiate and terminate subsong syllables,” Fee says.

Large-scale recordings from juvenile birds show that many HVC neurons burst right before subsong, but these bursts don’t actually drive the subsong. (HVC is not necessary for subsong in young birds.) “They are simply being activated before those syllables turn on,” Fee says. According to Fee’s model, a copy of the random activity from LMAN is fed back to HVC. This input triggers the neurons that are active right before subsong starts. Fee and his collaborators hypothesize that this incoming activity helps to form the chains of neurons that will eventually produce protosyllables.

Fee also described new experiments in which the team measured sequential HVC activity via calcium imaging. The goal of these experiments is to monitor a large fraction of these neurons as syllables emerge.

Sensory information processing

Marlene Cohen, an experimentalist at the University of Pittsburgh, studies how attention influences sensory processing. Cohen and collaborators have previously shown that attention decreases correlated activity among neurons within specific parts of the visual cortex. For example, when animals switch their attention from outside a cell’s receptive field to inside the receptive field, the cell’s firing rate increases but becomes less synced with that of its neighbors.
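The quantity at issue here is the ‘noise correlation’: the correlation, across repeated identical trials, of two neurons’ spike-count fluctuations. The sketch below is a toy model with invented numbers, not Cohen’s analysis; it shows the standard measurement and mimics the within-area finding by giving the attended condition a higher firing rate but a weaker shared gain fluctuation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 300

def pair_counts(shared_sd, mean_rate):
    # A gain fluctuation shared by both neurons induces correlated counts.
    gain = np.clip(rng.normal(1.0, shared_sd, n_trials), 0.05, None)
    return rng.poisson(mean_rate * gain), rng.poisson(mean_rate * gain)

# Toy version of attention: higher rate, weaker shared fluctuations.
a_out, b_out = pair_counts(shared_sd=0.3, mean_rate=10)  # attend outside RF
a_in, b_in = pair_counts(shared_sd=0.1, mean_rate=15)    # attend inside RF

print("noise correlation, unattended:", round(np.corrcoef(a_out, b_out)[0, 1], 2))
print("noise correlation, attended:  ", round(np.corrcoef(a_in, b_in)[0, 1], 2))
```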

[Image: Vikram Gadagkar, SCGB fellow]

In new research, Cohen’s team discovered that attention has a very different effect on the correlation among different brain areas — attention increases correlations between areas V1 and MT. The researchers are using this pattern to constrain models of how attention influences cortical circuits. Cohen’s collaborator, theoretician Brent Doiron, built a model of a balanced network that captured these features. The model predicts that attentional modulation should be low-dimensional (that is, it doesn’t have complicated effects on the interactions between neurons), and the researchers confirmed this experimentally. The model also predicts that attention should modulate inhibition more than excitation. Experiments to confirm this using intracellular recordings are currently under way.

Cohen and Doiron’s findings support the two main hypotheses for how attention boosts perception. According to one theory, attention may improve how well a group of neurons in the visual cortex encodes visual information. The decrease in correlations within individual visual areas is consistent with this idea, since reducing correlations might reduce redundancy. (Cohen cautions, however, that the relationship between correlations and information coding is complicated.) Alternatively, attention to visual information may enhance its transmission to downstream brain areas involved in decision making. The increase in correlations across areas supports this idea, suggesting that different regions are communicating better when the animal pays attention.

Techniques for large-scale data analysis

A core goal of the SCGB is to develop more advanced and better-standardized approaches for analyzing large-scale neural activity and other data. Researchers at the meeting introduced new, publicly available data analysis software for spike sorting and calcium image processing, as well as state-of-the-art recording electrode technology.

Jeremy Freeman, a neuroscientist at Janelia, aims to help standardize systems neuroscience and encourage collaboration. To that end, he introduced neurofinder, a freely available web-based platform that allows researchers to compare different algorithms for analyzing calcium imaging data. Freeman also gave a preview of spikefinder, a similar platform for the use and comparison of spike-sorting algorithms.

Kenneth Harris, a neuroscientist at University College London, introduced a new state-of-the-art electrode for the rodent brain called Neuropixels. The device features 1,000 recording sites spaced 20 micrometers apart along an extremely thin 1-centimeter-long shaft. The electrode is now being mass-produced and will be available to the community in 2018.

[Image: Chunyu Ann Duan, SCGB fellow]

The ability to record simultaneously from so many sites requires better algorithms for spike sorting — determining which neurons fired and when. In electrophysiology, the only way to distinguish between different neurons is to separate them according to the characteristic shapes of their voltage traces, or spike waveforms. But making that determination can be challenging, because neurons that have similar locations and geometries can produce nearly identical electrical responses, says Jeremy Magland, a data scientist at the Flatiron Institute’s Center for Computational Biology (formerly the Simons Center for Data Analysis). “Noise from distant firing events and other sources makes it difficult to distinguish between these cells,” he says. “This becomes a high-dimensional clustering problem, where the cluster shapes can be quite complex.”
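The clustering problem Magland describes can be illustrated schematically. The sketch below is not MountainLab’s algorithm, just the textbook recipe it improves on: project detected waveform snippets onto a few principal components and cluster them, with each cluster treated as a putative neuron. All waveforms and noise levels are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 40)                       # 40-sample waveform snippets

def waveform(amp, width):
    # A simple negative-going spike shape (toy stand-in for real waveforms).
    return -amp * np.exp(-((t - 0.3) ** 2) / width)

# Two nearby units with similar (but not identical) waveforms, plus noise.
snippets = np.vstack(
    [waveform(8, 0.004) + rng.normal(0, 0.5, (200, 40)),
     waveform(6, 0.008) + rng.normal(0, 0.5, (200, 40))]
)

features = PCA(n_components=3).fit_transform(snippets)  # dimensionality reduction
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```

Real sorters must also handle overlapping spikes, electrode drift and an unknown number of clusters, which is where the hard automation problems live.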

Magland and Loren Frank, of the University of California, San Francisco, presented spike-sorting software, called MountainLab, for analyzing multichannel neural recordings. Magland and Frank say that MountainLab is different from other spike-sorting software in that it minimizes the number of free parameters that must be set by the user. “It is almost completely automatic with very few tweakable parameters, which has implications for reproducibility of findings and standardization between laboratories,” Magland says.

Harris introduced Kilosort, another new, open-source software package for analyzing multichannel neural recordings. The spike-sorting algorithm, which is based on a generative model of spike waveforms, has already found widespread use within the community. Kilosort and MountainLab were both developed with the goal of minimizing or eliminating the element of human review and judgment, though neither has fully achieved that goal yet.

Liam Paninski of Columbia showed three short films illustrating his group’s statistical approach to neural data analysis. Their methods include algorithms for processing calcium imaging data, inferring spike times from noisy multineuron recordings, and dimensionality reduction for a complex task involving two continuous parameters. Compared with existing models, the third algorithm captures a much larger proportion of neural variability with a small number of latent dimensions, providing superior predictive performance and interpretability. For example, the researchers applied the algorithm to neural activity recorded while animals viewed a torus shape. “The new method is able to nicely recover this torus topology from the neural data without any prior knowledge that a torus was the ‘right’ shape here,” Paninski says. “Previous methods weren’t able to recover this structure.”
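As one concrete example of this family of methods, the sketch below infers spikes from a simulated calcium trace by nonnegative deconvolution, a standard approach in this literature, though not necessarily the exact algorithm shown in the films; all parameters are invented.

```python
import numpy as np
from scipy.optimize import nnls

# Model: fluorescence = spikes convolved with an exponential decay, plus noise.
rng = np.random.default_rng(6)
T, dt, tau = 200, 0.05, 0.5
spikes_true = (rng.random(T) < 0.05).astype(float)
kernel = np.exp(-np.arange(T) * dt / tau)

# Convolution written as a lower-triangular matrix acting on the spike train.
A = np.array([[kernel[i - j] if i >= j else 0.0 for j in range(T)]
              for i in range(T)])
fluorescence = A @ spikes_true + rng.normal(0, 0.05, T)

# Nonnegative least squares keeps the inferred spike rates >= 0.
spikes_hat, residual = nnls(A, fluorescence)
print("true spike count:", int(spikes_true.sum()),
      "| inferred spike mass:", round(spikes_hat.sum(), 1))
```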

As techniques for collecting and analyzing large amounts of data from neural populations grow more sophisticated, figuring out which features in the data are meaningful becomes more challenging. How can we better identify truly significant population-level effects in large-scale recordings? Addressing this question, John Cunningham of Columbia described a new method of statistically testing whether the population-level description of a group of neurons truly contains new information beyond what can be understood from the single-neuron level.

[Image: Lisa Giocomo, SCGB investigator]

For data describing N neurons, C conditions and T time steps, a naive method of analysis might be to shuffle the data in some way, then to compare the population-level behavior of the original data and the shuffled data to see whether any interesting effects persist. Cunningham points out that in practice this comparison can be problematic. The shuffled data may not preserve certain ‘primary features,’ such as single-neuron tuning curves and pairwise correlations. This makes it impossible to determine whether changes in the population-level description are caused by the trivial disruption of primary features or by a more interesting effect at the population level.

To deal with this issue, Cunningham developed a method for constructing surrogate datasets that have the specified primary features but are otherwise unstructured. He applied this technique to published prefrontal and motor cortex datasets from subjects performing frequency discrimination and reaching tasks. Using this approach, Cunningham could determine when population-level recordings carried more information than single-neuron recordings and when they did not.
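A stripped-down version of the surrogate idea, for a single condition and with invented data (Cunningham’s actual construction is more careful), looks like this: preserve each neuron’s trial-averaged response exactly, but shuffle each neuron’s trial-to-trial residuals independently, so that any across-neuron, population-level structure is destroyed.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_neurons, n_time = 50, 30, 20
data = rng.normal(size=(n_trials, n_neurons, n_time))   # stand-in recordings

mean = data.mean(axis=0, keepdims=True)     # per-neuron, per-time-bin average
residuals = data - mean

surrogate = np.empty_like(residuals)
for j in range(n_neurons):
    perm = rng.permutation(n_trials)        # independent shuffle per neuron
    surrogate[:, j, :] = residuals[perm, j, :]
surrogate += mean

# Single-neuron averages match exactly; trial-by-trial structure shared
# across neurons has been scrambled.
print(np.allclose(surrogate.mean(axis=0), data.mean(axis=0)))
```

If a population-level effect survives in the real data but vanishes in many such surrogates, it cannot be explained by the preserved single-neuron features alone.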

David Tank, Beth Buffalo and Lisa Giocomo presented work on the hippocampus and entorhinal cortex. For more on this research, see our accompanying news story.

Additional reporting by James Murray
