The State of Computational Neuroscience

Adrienne Fairhall, a computational neuroscientist at the University of Washington, reflects on how the field has evolved and on the biggest issues yet to be solved.

Computational neuroscience is one of the most rapidly growing subfields in neuroscience. New analysis and modeling techniques are urgently required to make sense of the reams of data produced by novel large-scale recording technologies.

In its October 2017 issue, Current Opinion in Neurobiology explored whether the field is living up to this “grand challenge.” Editors Adrienne Fairhall of the University of Washington in Seattle, and Christian Machens of the Champalimaud Foundation in Lisbon, Portugal, “invited authors to share recent contributions and perspectives that demonstrate the application of theory and modeling in the analysis of systems, and the formulation of new statistical tools to capture structure in neural data and in behavior.”

Adrienne Fairhall, a neuroscientist at the University of Washington and member of the SCGB executive committee.

The result is a concise tour of some of the most important problems in computational neuroscience. “The issue serves as a useful introduction to the growing need for collaboration between experimentalists and theorists,” says Gerald Fischbach, distinguished scientist and fellow at the Simons Foundation. “There is a clear emphasis on the intellectual frontiers to be addressed and the practical questions in brain science that will benefit. It will be of great educational value to 99 percent of working neuroscientists.”

It’s the second such issue that Fairhall, an investigator with the Simons Collaboration on the Global Brain (SCGB) and member of its executive committee, has put together for the journal. SCGB spoke with Fairhall about her perspective on the state of computational neuroscience and how the field has evolved since the previous special issue she worked on was published in 2014. An edited version of the conversation follows.

Why is computational neuroscience such a timely topic?

As we and many others have said, with the amount of data being recorded continually increasing, along with the complexity of that data, we really need theory and analysis tools to handle it all. This field has also become important globally, not just in neuroscience but in industry, where AI (artificial intelligence) research is on the rise. Everyone suddenly cares about this a lot.

What have been the biggest changes in computational neuroscience since you published the 2014 special issue?

Some of the themes that started there have become more mainstream — using recurrent neural networks to build models, for example. Larry Abbott, Krishna Shenoy, Mark Churchland, Bill Newsome and David Sussillo were early adopters, and now the idea is becoming much more central.

While I was writing the introduction, I realized how I wanted to frame this development in the context of the history of the field. A lot of the classical theories of neuroscience were based on attractor solutions to neural networks. An attractor solution is a state the network tends to settle into. Having point attractors, for example, means that the network settles into specific configurations of firing rates. That idea has been critical for thinking about systems that do integration; Mark Goldman’s SCGB work is one example.
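
To make the attractor idea concrete, here is a minimal rate-model sketch (an illustration of the concept, not taken from the issue; all parameters are arbitrary): a single unit whose recurrent feedback either exactly cancels its leak, so that it integrates its input and holds the result like the integrators mentioned above, or falls short, so that it relaxes back to a point attractor once the input is removed.

```python
# Minimal sketch with illustrative parameters: rate dynamics tau * dr/dt = -r + w*r + input.
# With the recurrent weight tuned so feedback cancels the leak (w = 1), the unit integrates
# its input and holds the result (a line of attractors); with w < 1 it decays back to a
# single point attractor at zero once the input ends.
import numpy as np

def simulate(w, inputs, tau=0.1, dt=0.001):
    """Euler-integrate a one-unit rate model with recurrent weight w."""
    r, rates = 0.0, []
    for inp in inputs:
        r += (dt / tau) * (-r + w * r + inp)
        rates.append(r)
    return np.array(rates)

# A brief input pulse followed by silence: the tuned unit holds the integrated value,
# while the untuned unit relaxes back toward zero.
pulse = np.concatenate([np.ones(100), np.zeros(900)])
print("tuned (w = 1.0), final rate:", simulate(1.0, pulse)[-1])
print("untuned (w = 0.5), final rate:", simulate(0.5, pulse)[-1])
```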

But attractor solutions represent just one aspect of what neural networks can do, and a rather simple one at that. Neural networks generate complex dynamics and don’t require attractor solutions to carry out their job. We don’t need to wait for a recurrent network to settle into a solution; we can instead take advantage of its ongoing dynamics to perform a task. Given an input, can my network generate a particular specified output? That uses the richness of the internal dynamics in a broader sense than simply analyzing attractor structure.
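
As a rough illustration of that idea (not the specific methods referred to here; the network and parameters are made up), the sketch below drives a fixed, randomly connected recurrent network with an input pulse and fits only a linear readout, so that the network’s internal dynamics, rather than any attractor state, produce a specified target output.

```python
# Reservoir-style sketch with illustrative parameters: a fixed random recurrent network is
# driven by an input pulse, its activity is recorded, and a linear readout is fit by ridge
# regression so that the dynamics reproduce a target output signal.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 200, 500, 0.01
W = rng.normal(0.0, 1.5 / np.sqrt(N), (N, N))   # random recurrent weights
w_in = rng.normal(0.0, 1.0, N)                  # input weights

u = np.zeros(T)
u[:50] = 1.0                                    # input: a brief pulse
target = np.sin(np.linspace(0, 4 * np.pi, T))   # specified output to generate

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):                              # Euler-integrate tanh rate dynamics
    x = x + dt * (-x + np.tanh(W @ x + w_in * u[t]))
    states[t] = x

# Fit readout weights on the recorded states (ridge regression), then check the error.
w_out = np.linalg.solve(states.T @ states + 1e-3 * np.eye(N), states.T @ target)
print("readout mean-squared error:", np.mean((states @ w_out - target) ** 2))
```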

The hope is that if we can understand how a network trained to do a task solves the problem, we can extract some kind of computational principle. I think that’s where the field is going — using data to fit neural networks, which then serve as a computational tool to extract meaning. The first great example was from Sussillo and Newsome, who looked at a complex decision-making task. This is something people are now trying more broadly — Mehrdad Jazayeri’s SCGB project training both animals and networks to remember and reproduce a time interval is a great example. This approach is a key conceptual advance in the field that is gaining interest and traction.

Are there drawbacks to this approach?

Neural networks are extremely powerful and can help us extract high-level principles of computation. But I think it’s important to maintain a foothold in the world of real biology. Neurons have specific biophysical properties. If we really want to solve problems in the brain, what we have access to now, in terms of genetic manipulations or pharmaceuticals, are those biophysical properties. In this exciting new domain of high-level neural network models, you don’t care about the exact properties of individual cells. I think it’s necessary to be able to link those two levels. What role does biophysics play in governing neural network dynamics? For example, how does dopamine affect the properties of individual neurons? The new issue includes an article about dynamics in Parkinson’s disease, which emphasizes how much individual neuron properties matter to the outcome of the modeling. In selecting topics for the issue, I wanted to highlight that contrast, spanning more abstract modeling as well as biophysical-level modeling with clinical implications.
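
As a loose illustration of how a single biophysical parameter can change what a neuron does with the same input (the model and numbers below are hypothetical stand-ins, not taken from the Parkinson’s article), here is a leaky integrate-and-fire sketch in which the strength of a spike-triggered adaptation current plays the role of a modulated intrinsic property.

```python
# Hypothetical sketch: a leaky integrate-and-fire neuron with a spike-triggered adaptation
# current. Increasing the adaptation strength g_adapt, a stand-in for a neuromodulatory
# change in an intrinsic property, lowers the firing rate produced by the same input.
import numpy as np

def lif_with_adaptation(g_adapt, I=1.5, steps=2000, dt=0.001, tau=0.02, tau_a=0.2):
    """Return the firing rate (Hz) of a unit-threshold LIF neuron with adaptation."""
    v, a, spikes = 0.0, 0.0, 0
    for _ in range(steps):
        v += (dt / tau) * (-v - g_adapt * a + I)   # membrane dynamics
        a += (dt / tau_a) * (-a)                   # adaptation variable decays
        if v >= 1.0:                               # threshold crossing: spike and reset
            spikes += 1
            v = 0.0
            a += 1.0                               # each spike increments adaptation
    return spikes / (steps * dt)

for g in (0.0, 0.5, 1.0):
    print(f"g_adapt = {g:.1f}: firing rate = {lif_with_adaptation(g):.1f} Hz")
```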

What are some other developments in the field that you’re excited about?

One of the other things I have been interested to see is how the story of dopamine has evolved over time. The idea that dopamine is simply a signal of reward prediction error is worth revisiting. The story is more complicated. (For more on dopamine and reward prediction error, see “How Practice Makes Perfect: Dopamine Clues From a Songbird” and “Dopamine Cells Influence Our Perception of Time.”) For example, Ilana Witten, also an SCGB investigator, has shown that different dopamine neurons carry different kinds of signals. Others are also looking more deeply into dopamine’s role — in the dynamics of mushroom bodies in the fruit fly (the SCGB project “The Representation of Internal State in the Fly Brain” explores this issue) and in birdsong (“A Dopamine Dial for Controlling Neural Variability”), for example. Revisiting the complexity of the signals that dopamine carries, and the learning algorithms those signals presumably support, is an emerging theme.
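
For reference, the classic picture being revisited can be written in a few lines: in temporal-difference learning, the prediction error delta = r + gamma * V(next state) - V(state) is the quantity dopamine has traditionally been proposed to encode. The little chain of states and rewards below is purely illustrative.

```python
# Illustrative sketch of the classic reward-prediction-error account: temporal-difference
# learning on a short chain of states, with reward delivered only at the end. The error
# term delta is what dopamine has traditionally been proposed to signal.
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)                     # learned value of each state

for episode in range(200):                 # repeatedly walk the chain from start to end
    for s in range(n_states - 1):
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0              # reward at the final state
        v_next = 0.0 if s_next == n_states - 1 else V[s_next]   # terminal value is zero
        delta = r + gamma * v_next - V[s]  # the "dopamine-like" prediction error
        V[s] += alpha * delta              # value update driven by the error

print("learned state values:", np.round(V, 2))
```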

Is there an alternative model at this point?

Not yet; we are still gathering data. One of the papers in the issue discusses what we should be looking for. It’s a frontier area right now.

What are some of the big open questions in the field?

An active and important area is the fitting of high-dimensional data. There’s a lot of interesting activity, but it’s still pretty open. What are the right ways to analyze things? Larry Abbott, David Sussillo and Memming Park have been doing nice work using recurrent neural networks to fit coding models that build in an explicit dynamical system. That raises an interesting tension between looking at things from a statistical description perspective and from a dynamical systems perspective. The extent to which we get models at the end that are comprehensible is up in the air.

A lot of the work that I and others have done in the past tries to extract coding models from data — for example, fitting a receptive field to predict a neuron’s output. With these emerging methods for analyzing high-dimensional data, rather than fit a receptive field, you train a randomly connected recurrent network to produce a certain kind of output. That’s different from a simple receptive field model. You often get more accurate predictions of what the system will do, but maybe you’re giving up an intuition about what’s going on, so we end up building network solutions that we don’t really understand. The paper I wrote for the 2014 issue was perhaps an expression of anxiety about that development.
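
For contrast, here is roughly what the simpler, interpretable kind of fit looks like: a sketch on synthetic data (the “true” filter and the noise level are invented for illustration) that recovers a linear temporal receptive field by least-squares regression from stimulus to response.

```python
# Sketch on synthetic data: fit a linear temporal receptive field by least squares.
# The recovered filter is directly interpretable, which is the appeal of this kind of
# coding model, but it captures far less than a trained recurrent network can.
import numpy as np

rng = np.random.default_rng(1)
T, L = 5000, 20                                  # time bins and filter length
stimulus = rng.normal(size=T)

# An assumed "true" receptive field and a simulated noisy response (illustration only).
true_rf = np.exp(-np.arange(L) / 5.0) * np.sin(np.arange(L) / 2.0)
X = np.array([stimulus[t - L:t] for t in range(L, T)])   # lagged-stimulus design matrix
response = X @ true_rf[::-1] + 0.5 * rng.normal(size=T - L)

# Least-squares estimate of the receptive field from stimulus-response data.
rf_hat = np.linalg.lstsq(X, response, rcond=None)[0][::-1]
print("correlation with true filter:", round(np.corrcoef(rf_hat, true_rf)[0, 1], 3))
```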

What advice would you give students just getting into this field?

There are a lot of theory jobs open now, and I’m sure a lot of departments are having discussions about what kind of people to hire. The big companies — Google, Facebook, Amazon — are also hiring people with a computational neuroscience background. Going to work at Facebook and Google is something that excites students, and it should. There’s a lot of fundamental neural network research going on there. Although this is really exciting and a fantastic time of opportunity, I do hope it doesn’t lead to a complete dominance of machine learning at the expense of everything else. There is room for a variety of approaches, and an interaction among different ways of thinking has been really powerful for the field in the past. My advice is not to glom on to industry research too soon. Take advantage of working with real systems to gain insight into biology, to get a deeper insight into neural algorithms and the brain’s complexity. Stay open-minded about discovering new ideas about computation from biology.

What do you hope to see in the field over the next few years?

I hope there continues to be a flow of ideas from basic neuroscience into industrial research. For example, I believe that the role of neuromodulators and the subtleties of dopamine’s function reflect principles we can ultimately extract and, hopefully, build into algorithms for solving problems like one-shot learning.
