In Search of the Cogwheels of the Brain

Mehrdad Jazayeri says researchers need to think carefully about what it means to do causal neuroscience.

Mehrdad Jazayeri

Powerful new tools, such as optogenetics, have given neuroscientists the ability to manipulate the brain with unprecedented precision. Researchers can silence and activate subsets of cells and analyze the resulting changes in behavior. But these sophisticated techniques raise important questions, says Mehrdad Jazayeri, a neuroscientist at the Massachusetts Institute of Technology and an investigator with the Simons Collaboration on the Global Brain (SCGB). In a paper published in Neuron in March, Jazayeri argues that the field needs to give critical thought to what it means to do causal neuroscience.

Neuroscience has long focused on correlational studies, in which scientists record brain activity and behavior and look for links between the two. But techniques like optogenetics allow experimenters to directly perturb neural activity. “With these functional assays, we try to figure out causally what different parts of the brain do,” Jazayeri says. “Causal analysis is harder. It brings up new challenges.” He argues that scientists must consider the right level — be it proteins, cells or circuits — to perturb in a particular experiment. In March, Jazayeri discussed his ideas with SCGB. An edited and condensed version of the conversation follows.

SCGB: What was the inspiration for the Neuron paper?

Mehrdad Jazayeri: I was speaking at a Cosyne workshop two years ago that my co-author, Arash Afraz, put together on causal and correlational neuroscience. Most people gave talks about how great causal neuro has been — and is. My talk was a bit different and caused a lot of debate. It’s supercool that we can perturb parts of the brain and look at what happens. But it can also generate results that are hard to interpret. Given the vigorous debate and everyone’s deep interest, we decided to turn the discussion into a paper.

Do people use these two terms — causal and correlational — improperly?

I think people know what the terms mean, but it’s important to clarify. In the general sense, if you show an animal a stimulus and record from parts of the brain, that’s a correlational study. Any experiment where some parts of the system are made to take on a particular state, randomly with respect to the rest of the system, that’s a causal perturbation. People who read the article said it helped them clarify things they had thought about but hadn’t put into words.
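To see why the “random with respect to the rest of the system” clause matters, here is a toy simulation in Python, a minimal sketch with made-up numbers rather than anything from the paper. An unobserved drive pushes both a neuron’s firing rate and the behavior, so passive recording yields a strong correlation even though, in this toy model, the neuron has no effect on the behavior at all; setting the rate at random breaks that link.

```python
# Toy illustration (hypothetical model and numbers): correlation from shared
# drive versus a randomized perturbation of a single neuron's firing rate.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000

# Unobserved drive shared by the neuron and the behavior ("the rest of the system")
drive = rng.normal(size=n_trials)

# Recording only: the neuron's rate follows the drive, and so does the behavior,
# even though the neuron itself has no effect on the behavior in this toy model
rate_recorded = drive + 0.3 * rng.normal(size=n_trials)
behavior = drive + 0.3 * rng.normal(size=n_trials)

# Causal perturbation: the neuron's rate is set at random,
# independently of everything else in the system
rate_clamped = rng.normal(size=n_trials)
behavior_perturbed = drive + 0.3 * rng.normal(size=n_trials)

print("correlation, recording only:         ",
      round(np.corrcoef(rate_recorded, behavior)[0, 1], 2))           # ~0.9, spurious
print("correlation, randomized perturbation:",
      round(np.corrcoef(rate_clamped, behavior_perturbed)[0, 1], 2))  # ~0.0
```

The second correlation is near zero because randomization severs the neuron’s statistical tie to the shared drive, which is exactly what licenses a causal reading of any behavioral change that does appear.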

You draw a comparison between the brain and a mechanical clock. Taking apart the clock’s system of cogwheels can help us understand how the device functions. But disassembling the cogwheels into smaller components doesn’t make much sense. How does this idea apply to the brain?

That’s a notion that has helped me and others think more deeply about what it means to do causal perturbation. What’s valuable about the cogwheel analogy is that it highlights the need to think about building blocks that serve a coherent function on their own. The same argument wouldn’t hold for a few atoms of a cogwheel. That point can get muddied in causal neuroscience. We come up with tools that can target cell types or genes or proteins without asking whether the target is a cogwheel or just part of one. In neuroscience, we need to interface with the system at the right level, so that perturbations interfere with its function in a meaningful way.

How can we determine the equivalent of cogwheels in the brain?

We provide a road map for trying to figure that out. When we can formulate straightforward hypotheses about how a perturbation ought to alter behavior, and reality follows our prediction, we are probably interacting with one or more cogwheels. When perturbations are too fine-grained and we can’t predict what happens — for example, if you knock out a gene and the animal engages in some strange behavior — it’s likely we’re hitting the system at a sub-cogwheel scale. No one knows all the scales at which the system’s cogwheels operate, but it’s important to take both the successes and the failures seriously.

Do neural cogwheels vary under different conditions?

The complexity — or rather, dimensionality — of the cogwheel depends heavily on the particular task. Some behaviors are instinctive and simple. They might have tiny cogwheels that depend on a particular group of cells or a few genes. But for other behaviors, such as memory and anticipation, the cogwheel might involve large circuits. In that case, it’s not a particular cell type that’s most relevant, but rather a whole circuit.

No one knows what the right scale is. I can study behavior at the level of dendrites, someone else at the circuit level, and another person at the whole-brain level with an MRI machine. All of those scales matter for understanding a behavior. But the scales that provide a language for predicting the computations relevant to a behavioral task are the ones I’d consider the cogwheels of the behavior. And that will vary depending on the species and the task. In a small system such as C. elegans, which has 302 neurons and a limited behavioral repertoire, small combinations of neurons may be enough to predict certain types of behavior. But the cognitive tasks that people study in primates might require understanding that transcends the level of individual neurons.

You emphasize that it’s important to characterize neurons’ normal behavior. Why?

Those of us doing perturbation studies need to know what neurons do normally, via thorough large-scale recording of intrinsic neuron responses. For example, say neuron A is typically quiet when neuron B is active. If we now make both neurons active via perturbation — something that doesn’t normally happen in the system — and some strange behavior emerges, it’s difficult to interpret the outcome.

The collective activity of neurons can be thought of as points in a large coordinate system, and the way in which they are organized and coordinated is sometimes loosely called an intrinsic manifold. The more a causal experiment moves away from the intrinsic manifold, the harder it gets to make strong inferences. Say a particular behavior is well described by 100 neurons in a specific brain area. Those 100 neurons work together to generate a behavior. Then say you turn off 10 neurons within that system that don’t do anything independent of the other ones. Turning off those cells might trigger a change in behavior. But that change might be difficult to interpret since it doesn’t happen naturally.
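To picture that, here is a minimal sketch in Python with hypothetical numbers, not an analysis from the paper: population activity is confined to three latent dimensions, a perturbation silences 10 of 100 neurons, and the distance of each state from the subspace estimated from natural activity serves as a rough proxy for how far the experiment has moved off the intrinsic manifold.

```python
# Rough illustration (hypothetical model): natural activity on a low-dimensional
# subspace, and a perturbation that pushes population states off that subspace.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_latent, n_samples = 100, 3, 5_000

# Natural activity: three latent factors read out by 100 neurons, plus small noise,
# so the population states hug a low-dimensional subspace (a stand-in for the manifold)
latents = rng.normal(size=(n_samples, n_latent))
readout = rng.normal(size=(n_latent, n_neurons))
activity = latents @ readout + 0.05 * rng.normal(size=(n_samples, n_neurons))

# Estimate the "intrinsic manifold" as the top principal subspace of natural activity
mean = activity.mean(axis=0)
_, _, vt = np.linalg.svd(activity - mean, full_matrices=False)
basis = vt[:n_latent]  # top three principal directions, shape (3, 100)

def distance_from_manifold(states):
    """Root-mean-square residual after projecting states onto the estimated subspace."""
    centered = states - mean
    residual = centered - (centered @ basis.T) @ basis
    return np.sqrt((residual ** 2).sum(axis=1)).mean()

# Perturbation: silence 10 of the 100 neurons, a pattern the natural dynamics never produce
perturbed = activity.copy()
perturbed[:, :10] = 0.0

print("natural states:  ", round(distance_from_manifold(activity), 2))   # small (noise only)
print("perturbed states:", round(distance_from_manifold(perturbed), 2))  # much larger
```

Because the silenced pattern never occurs in the natural data, the perturbed states land far from the estimated subspace, which is the regime in which, as Jazayeri notes, strong inferences become difficult.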

Has the field resisted these ideas?

Thinking hard about these issues can only help. We aren’t criticizing perturbation studies or taking sides with a particular technology. We are saying that experimenters need to be open-minded about scale — understanding might come at a mesoscopic scale rather than a microscopic one. Tools that target cell types are great, but not all cogwheels operate at the level of cell types. Being too dedicated to our tools is a dangerous thing.
