
Thinking on the Go: Why Does the Whole Brain Light Up for Just the Smallest Movements?

Seemingly meaningless movements — even a quick twitch of a limb — produce abundant, widespread and high-dimensional activity in the cortex. How might the brain use this information?

As I sit here trying to decide how to begin this story, knowing how important a good first sentence is, I’m pressing my right index finger to my pursed lips, rubbing the stubble below my mouth, gently tapping the nape of my neck, and rhythmically stroking my nose. Suffice it to say: When I think, I fidget.

These movements are top of mind because of new research that suggests fidgeting might be much more than simple restlessness. Anne Churchland and her colleagues at Cold Spring Harbor Laboratory have shown that fidgeting — in mice, at least — activates not just the regions typically associated with movement, but the whole brain. A brief, apparently meaningless whisk or kick of a hind limb evokes a burst of neural activity over the entire cerebral cortex.

It’s provocative news for the world’s finger drummers, pen tappers, pacers, doodlers and chin strokers. But the findings also feed into a growing appreciation across the systems-neuroscience community that movement-related neural activity is a widespread and important signal in the brain — one that needs to be better understood.

“It’s for sure made me wonder if for certain organisms, including some humans, part of what it means to think is to move,” Churchland says. “That movements and cognition for those subjects are deeply intertwined.”

The discovery was a surprise for Churchland. Simon Musall, a postdoc in her lab, was using widefield calcium imaging to observe activity across the whole dorsal cortex as mice learned a decision-making task. The researchers expected to see a handful of distinct cortical regions light up as mice learned to press left or right for a reward according to a visual or auditory stimulus.

Uninstructed movements account for the majority of the variance in trial-to-trial neural activity. Musall et al., Nature Neuroscience, 2019.

“But what we actually saw was very different,” says Churchland, an investigator with the Simons Collaboration on the Global Brain (SCGB). “Many, many brain structures were engaged — many more than we anticipated and to a much greater extent than we anticipated.”

Having tracked the animals’ every movement, the researchers discovered that movement-related neural activity accounted for the majority of trial-to-trial variability in neural responses — not just in the motor and somatosensory cortical areas, where fidget-related activity might be expected, but all over the cortex. Neuroscientists have long written off such variability as noise. But Churchland’s work, published in Nature Neuroscience in September, suggests that a significant chunk of it is actually signal.

The study follows a paper describing a similar phenomenon, published in Science in April by Carsen Stringer and Marius Pachitariu, working in the labs of Ken Harris and Matteo Carandini at University College London. They found that a mouse’s facial movements accounted for a significant amount of population activity in its visual cortex. Further experiments showed this held true across the brain.

The new studies add to an expanding body of research exploring how an animal’s ongoing behavior profoundly influences the ways its brain processes sensory information and makes decisions. Previous research had mainly shown how single variables — such as running speed or pupil diameter (an indicator of arousal) — could account for changes in activity in individual sensory cortices. This newer work shows both that movement-related neural activity is broadcast across the whole brain, and that the signals are more complex than previously described.

The findings are challenging neuroscientists to reconsider how cortical processing operates and to develop new models that incorporate ongoing behavior and the entire cortex. “You have moment-by-moment information about what you’re doing across the whole brain,” Stringer says. The question this poses for future research, she says, is: “What does the brain do with that information?”

The two studies also raise important issues for how in vivo neurophysiologists design and interpret experiments. If even small, spontaneous movements account for significant amounts of neuronal activity, how can this insight be harnessed so researchers can better interpret what the brain is doing in any given instance?

Rising dimensionality

How surprising you find these results depends on your starting point, says Alex Huk, who works on primate visual processing at the University of Texas at Austin. “If you work off of 1960s first principles of functional specialization and brain modularity, these would be really, really surprising,” he says. In that classical — and enduring — model, signals related to motor behavior would be expected to show up only in specific regions — namely, in motor, premotor and somatosensory areas.

A classical tuning curve: For a typical Hubel and Wiesel neuron, activity can be neatly plotted as a function of a single parameter of an external stimulus. Hubel and Wiesel, 1968.

Those principles are epitomized by the work of David Hubel and Torsten Wiesel, who identified neurons in the visual cortex that encode specific features of visual scenes, such as the angle of a line. Today’s neuroscientists would call this a one-dimensional representation. This and subsequent research in other areas of the brain suggested that discrete cortical areas encode discrete types of information — thus, tonal pitch is encoded by auditory neurons, and position in space is encoded in the hippocampus. Neurons whose firing was well described in low-dimensional terms, and therefore easily understood, took the limelight.
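The one-dimensional idea can be made concrete with a toy model: a neuron’s mean firing rate plotted as a bell-shaped function of a single stimulus parameter. The function and every number below are invented for illustration, not taken from Hubel and Wiesel’s data.

```python
import numpy as np

# A toy one-dimensional tuning curve: firing rate as a Gaussian bump
# over a single stimulus parameter (here, edge orientation in degrees).
# All parameter values are illustrative.
def tuning_curve(orientation, preferred=45.0, width=20.0, peak=30.0):
    """Mean firing rate (Hz) for a stimulus at the given orientation."""
    # Circular distance on 0-180 degrees (orientation is axial).
    d = np.minimum(np.abs(orientation - preferred),
                   180.0 - np.abs(orientation - preferred))
    return peak * np.exp(-0.5 * (d / width) ** 2)

orientations = np.arange(0, 180, 15)
rates = tuning_curve(orientations)
# The neuron fires most at its preferred orientation and falls off smoothly:
# a response fully described by one stimulus dimension.
```

Knowing the single variable — orientation — is enough to predict the cell’s response, which is exactly what made such neurons so tractable to study.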

Much sensory processing work in the decades that followed used anesthetized animals, and primate cognition research often used extensively trained, movement-restricted animals. The goal was to isolate the process of interest and minimize noise from extraneous variables.

Such experiments were staples of Michael Stryker’s early career studying visual cortical processing, beginning in the 1970s. But in the 2000s, Stryker, a neuroscientist at the University of California, San Francisco, and an SCGB investigator, launched a study with his then-postdoc Cris Niell in which they examined the cortices of awake mice. They held the animals’ heads still in front of screens displaying classical visual stimuli but placed the mice on treadmills that allowed them to walk or run.

This work, published in Neuron in 2010, revealed how profoundly the animals’ behavior affected sensory coding. Qualitatively, neurons responded to visual stimuli similarly to those in anesthetized animals. But overall levels of neuronal firing varied hugely, and “locomotion accounted for a large fraction of variability in responses,” Stryker says. In fact, cortical neurons spiked two to three times more in response to the same stimulus when the animal ran compared to when it was stationary.

Since then, more and more groups have looked at sensory processing in awake, behaving mice, and researchers are increasingly using virtual reality systems where a behaving animal’s sensory experience can be well controlled and determined in real time by its own movements. The impact of locomotion and behavioral state has varied in different sensory modalities, but the overall message has been consistent: What an animal is doing profoundly influences how its brain processes incoming information.

Stryker says the Stringer and Musall papers take this core notion a step further. They show that it’s not just single parameters, such as locomotor speed, that modulate activity in areas once thought to be purely sensory. Rather, all bodily movements are reflected in the activity of neurons in sensory cortices and beyond.

The full orchestra

The new findings were made possible by cutting-edge behavioral monitoring technologies and large-scale neural recordings, as well as innovative methods for analyzing and interpreting the very large datasets those technologies generate.

The Harris and Carandini groups used calcium imaging to monitor the activity of more than 10,000 visual cortical neurons per recording session, first using mice walking on a treadmill with little or no visual stimulation. They saw that activity was partially determined by the animal’s speed, pupil diameter and whisker movements. But by using a novel analysis method designed to identify inherent structure in the data, they found that visual cortical activity was very high-dimensional — many hidden variables accounted for the overall population activity, and more than 100 had individually sizable effects.

Spontaneous behavior (top) predicts a significant fraction of neural activity (bottom). Stringer et al., Science, 2019.

After uncovering this structure, Stringer says, the team thought “maybe there’s more dimensions of behavior that might explain this activity.” They trained a video camera on the mouse’s face and neck muscles, ran a principal component analysis on the movement data, then asked whether the extracted dimensions were represented in visual cortical activity. They found that the visual cortex encoded at least 16 dimensions of motor information, with facial movements predicting roughly one-third of the neural activity. When the group gave the mice basic visual stimuli, they found that the same neurons encoded both visual and behavioral information.
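The logic of that analysis — compress the video into principal components, then ask how well those components predict neural activity on held-out data — can be sketched in a few lines. This is a minimal toy version with simulated data, not the authors’ actual pipeline; the dimensions, sizes and model choices here are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-ins for the real data: the experiment used face/neck video and
# >10,000 neurons; here 16 hidden "movement" variables drive both a fake
# video and a fake neural population.
T, n_pixels, n_neurons, n_dims = 2000, 500, 100, 16
latents = rng.standard_normal((T, n_dims))                    # true movement dims
video = latents @ rng.standard_normal((n_dims, n_pixels)) \
        + 0.1 * rng.standard_normal((T, n_pixels))            # noisy frames
neural = latents @ rng.standard_normal((n_dims, n_neurons)) \
         + rng.standard_normal((T, n_neurons))                # movement + noise

# Step 1: PCA compresses the video into candidate movement dimensions.
movement_pcs = PCA(n_components=n_dims).fit_transform(video)

# Step 2: a linear model maps movement PCs to every neuron's activity;
# held-out R^2 measures how much activity the movements predict.
X_tr, X_te, y_tr, y_te = train_test_split(
    movement_pcs, neural, test_size=0.25, random_state=0)
r2 = Ridge(alpha=1.0).fit(X_tr, y_tr).score(X_te, y_te)
print(f"held-out variance explained by movement PCs: {r2:.2f}")
```

Because the toy neural data are movement-driven by construction, the held-out R² comes out high; in the real recordings, facial movements predicted roughly a third of the activity.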

Next, their colleague Nick Steinmetz, now at the University of Washington, recorded electrical activity from roughly 2,000 neurons across a number of cortical regions, plus the midbrain, thalamus and striatum. Again, the video data predicted a significant fraction of neuronal activity in each region — demonstrating that high-dimensional representations of behavior aren’t a quirk of the visual cortex.

Inspired by early news of Stringer and Pachitariu’s findings, Churchland’s group added video monitoring to the pressure sensors and other devices they already used to track limb movements during experiments. They similarly used machine-learning analysis to extract principal components of the overall movement data stream. But in contrast to Harris’ and Carandini’s groups, they monitored movement-related neural activity in mice that were engaged in a cognitive task and distinguished between the effects of task-related movements and fidgety, spontaneous movements.

And rather than looking only at responses averaged across multiple trials, they worked at the level of individual trials. This revealed that a single fidget could illuminate the entire cortex and that movement-related activity accounted for the majority of intertrial variability in neuronal firing. Indeed, fidgeting made a slightly bigger contribution than task-related movements.
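The variance-partitioning logic behind a comparison like this can be sketched with a toy linear encoding model: fit task and movement regressors together, then drop each group and measure how much cross-validated R² falls. Everything below — the regressors, weights and sizes — is invented for illustration; the actual study used a far richer model.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(1)

# Toy single-trial data: each trial has task regressors (e.g. stimulus,
# choice) and movement regressors (e.g. video PCs of uninstructed movements).
n_trials = 1000
task = rng.standard_normal((n_trials, 2))
movement = rng.standard_normal((n_trials, 8))
# Simulated neuron: fixed weights chosen so that movement carries more
# variance than the task does, mimicking the paper's qualitative result.
y = task @ np.array([0.5, 0.3]) + movement @ np.full(8, 0.4) \
    + 0.5 * rng.standard_normal(n_trials)

def cv_r2(X, y, k=5):
    """k-fold cross-validated fraction of variance explained by OLS."""
    idx = np.arange(len(y))
    ss_res = ss_tot = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta, *_ = lstsq(X[train], y[train], rcond=None)
        ss_res += np.sum((y[fold] - X[fold] @ beta) ** 2)
        ss_tot += np.sum((y[fold] - y[fold].mean()) ** 2)
    return 1 - ss_res / ss_tot

full = np.hstack([task, movement])
r2_full = cv_r2(full, y)
# "Unique" contribution of each group: drop it and see how much R^2 falls.
dr2_task = r2_full - cv_r2(movement, y)
dr2_move = r2_full - cv_r2(task, y)
print(f"full model R^2: {r2_full:.2f}")
print(f"unique task contribution: {dr2_task:.2f}")
print(f"unique movement contribution: {dr2_move:.2f}")
```

In this toy setup, dropping the movement regressors costs the model far more predictive power than dropping the task regressors — the same kind of comparison that let the researchers say fidgets outweighed task-related movements.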

As was the case for the Stringer study, the same neurons encoded different types of information — individual cells could be primarily modulated by movement, decision-related factors or some combination of both.

“These studies are reminding us mice are in constant motion,” says Megan Carey, a neuroscientist at the Champalimaud Centre for the Unknown in Lisbon who uses sophisticated behavioral analysis to investigate the cerebellum. Although she has long advocated using behavioral measurements in neurophysiological work, she’s fascinated by the breadth and complexity of the movement-related signals Churchland’s and Harris’s groups have described. She was particularly struck by the observation that the same neurons that process sensory inputs or the demands of a cognitive task also explicitly represent movement. “That didn’t have to be true,” she says.

Moving forward

Carey says the new studies form part of a larger, ongoing trend toward more effectively incorporating behavioral measurements into neurobiological research. The ability to move in complex ways is a defining characteristic of animals. And a body’s position in space and what it is doing are central to determining how sensory information is received, interpreted and deployed. In this light, it’s perhaps not surprising that echoes of movements are found across the brain.

As an example of how the new findings might recast a scientific problem, Stringer cites the case of a predator visually tracking prey. This is a goal-directed behavior, she says, “where you really have to know where your body is and what you’re doing.” In old modular models of cortical function, signals would have to be constantly relayed back and forth between sensory and motor areas. But if the visual cortex contains a real-time, high-dimensional representation of all ongoing body movements, it reframes the possible ways a predator might stalk its prey.

Huk agrees, saying the new data raise profound conceptual questions that require neuroscientists to think big again — “to figure out what the new model is and what the new hypotheses are.”

If new models in some way reject the idea of highly specialized, highly modular brains, they won’t invalidate Hubel and Wiesel’s work. But they would suggest that neurons whose activity can be described in very low-dimensional terms — an edge detector, say — may need to be accommodated within a new framework in which higher-dimensional representations are the norm.

As new schemas are developed, scientists will also need to determine if they hold for all types of brains. Some researchers believe that in primates, the boundaries between cortical regions are more tightly demarcated than in rodents, and that more sophisticated, more specific processing occurs within each region. It’s possible that the integration of sensory, movement and cognitive signals across the entire cortex is limited to smaller brains, such as rodents’, which have fewer cortical regions and leakier borders between them.

Stryker, acknowledging certain key differences between the visual systems of rodents and primates, says he’s spent a decade trying to convince primate researchers to repeat his 2010 study. His co-author Niell — now at the University of Oregon — is collaborating with Huk to do just that. They’re investigating locomotor effects on marmoset vision in a virtual reality setting but haven’t yet reported results.

Video tracking of three of a mouse’s movement variables. Musall et al., Nature Neuroscience, 2019.

The Musall and Stringer papers also have practical implications for the design and analysis of neuroscience experiments. Churchland says all in vivo physiologists should now consider using video to track the movements of their experimental subjects. By demonstrating that extensive movement-related neural activity rides atop the activity underlying cognitive demands, even in restrained, expert mice — and by developing methods for subtracting out that activity — Churchland’s study points to ways to better isolate how brains mediate cognition. Her group accomplished this using unsupervised analysis of movements, and she says easy-to-use tools are now under development for widespread use.

“You want to know the extent to which movements are driving neural activity and then account for that when you interpret the neural responses,” Churchland says. As this work evolved, she says, they learned that neurons that initially seemed cognitively driven were, in fact, modulated by movement and vice versa.

Stringer believes that as this work evolves, the relationship between seemingly random movements and internal states needs to be further investigated — whisker twitches, for instance, may be intrinsically linked to a heightened state of arousal associated with exploration.

She also highlights the fact that widespread behavioral signals may challenge researchers to reinterpret older work. In particular, she is concerned about claims of neural plasticity derived from recordings of visual cortical activity in awake, behaving mice. In studies where animals learn to attend to a particular visual stimulus, experimenters have observed changes in the ways visual cortical neurons respond to that stimulus — and attributed them to intrinsic neural plasticity. But because an animal’s behavior so strongly modulates sensory cortical activity, and because the animals in those studies came to respond differently to the stimulus, the observed neural changes may simply reflect changed behavior rather than intrinsically plastic visual coding.

A final intriguing finding in Churchland’s study was the way fidgeting patterns changed over time. Neuroscientists have often voiced concerns that animals trained extensively on cognitive tests become borderline automatons. But in the study, even expert mice engaged in a rich array of movements.

Most provocatively, as the mice learned the cognitive task, some of their seemingly meaningless actions became increasingly time-locked to aspects of the task — an observation suggesting that these movements or the neural activity they evoked, or both, might have contributed to the learning process.

“Creatures evolved to have a brain to move the body, and cognitive tasks probably borrowed neural machinery from the movement systems,” Churchland says. She believes researchers should revisit the intersection between movement and thought and figure out how exactly the two are linked.

It’s a task that may leave neuroscientists scratching their heads. Or tapping their fingers, pacing the room or clicking their tongues — whatever helps.
