
Computational Model Uncovers New Insights Into How Our Brains Store Information

With the new model, researchers reveal how memories and learned behaviors can remain strong, even in the face of shifting neural representations.

Scientists’ understanding of how memories are stored is undergoing a seismic shift. Until recently, researchers thought that memories were tied to specific neurons and the synapses that connect them. But surprising experimental observations in the past decade have pointed to a new idea called representational drift, which posits that the neurons responsible for certain learned behaviors are not set in stone but continually changing — a radical departure from the established paradigm.

“Representational drift is so unexpected and paradoxical,” says Dmitri Chklovskii, a group leader at the Flatiron Institute’s Center for Computational Neuroscience, or CCN. “Why would the brain use something that’s always in flux to store memories long term?”

Researchers initially dismissed the early experiments as limited in scope, but evidence of representational drift continued to pile up, and scientists began searching for the mechanisms behind it. Now, a transparent computational model developed by researchers from the Flatiron Institute and Harvard University has shown how representational drift might occur. The new findings, published in Nature Neuroscience in January, mark a big step forward in understanding the surprising ways memories and learned behaviors are stored in our brains.

“It has become increasingly clear in recent years that neuronal representations are in flux,” says Chklovskii, a senior author on the paper. “Our research gives a possible answer to how this can work.”

Traditionally, neuroscientists believed that when we learn something new, pathways between neurons form to hold that information. Every time we tap into that knowledge, say, to navigate around town or identify the familiar scent of cinnamon, those neurons will fire in a particular, predictable way, forming what scientists call a neural representation.

But in the past decade, scientists equipped with multichannel electrodes and genetically enhanced optical methods realized this isn’t really how our brains work. In experiments with mice, researchers found that the neurons responsible for tasks like identifying smells or navigating a maze change over time. One day one set of neurons might respond to the scent of roses; another day, a completely different set.

“The basic idea is that the focus is not on what an individual cell represents,” Chklovskii says. “It’s the relationship between cells that’s important.”


Making models

To understand the mechanism responsible for representational drift, the Flatiron and Harvard team took advantage of so-called similarity matching networks previously developed by Chklovskii’s group. These networks are derived from principled objectives and are trained using Hebbian and anti-Hebbian learning rules found in biological systems. The researchers used their model specifically to explore representational drift in the hippocampus and the posterior parietal cortex — regions of the brain where neural representations have been experimentally shown to exhibit drift over days and weeks.
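For mathematically inclined readers, the idea behind similarity matching can be stated compactly: the network’s outputs should preserve the pairwise similarities of its inputs. A sketch of the core objective, based on the group’s published formulation (the paper’s version may include additional constraints, such as non-negative activities):

$$\min_{Y}\ \left\lVert X^{\top} X - Y^{\top} Y \right\rVert_{F}^{2}$$

Here the columns of $X$ hold input activity patterns, the columns of $Y$ hold the corresponding output patterns, and $\lVert \cdot \rVert_{F}$ is the Frobenius norm. Optimizing this objective online yields Hebbian updates for feedforward synapses and anti-Hebbian updates for lateral ones.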

When the brain forms a representation, or encoding, of the outside world, it settles arbitrarily on one of many different but equivalent ways of doing so. The model developed by the Flatiron and Harvard team emulates this by admitting many equivalent solutions for a representation, which can then change over time. In the model, synapses between neurons are weighted, or strengthened, as they are used. This type of Hebbian learning is captured in the common phrase “cells that fire together wire together.”
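As a toy illustration of Hebbian weighting, here is a minimal Python sketch. The network size, learning rate and linear response are illustrative assumptions, not the paper’s actual model:

```python
import numpy as np

# Toy Hebbian plasticity: a synapse strengthens when its presynaptic
# and postsynaptic neurons are active together ("fire together, wire
# together"). Sizes and learning rate are illustrative assumptions.
rng = np.random.default_rng(0)
n_in, n_out = 20, 5
W = rng.normal(scale=0.1, size=(n_out, n_in))   # synaptic weights
eta = 0.01                                      # learning rate

for _ in range(1000):
    x = rng.normal(size=n_in)                   # presynaptic activity
    y = W @ x                                   # postsynaptic response
    W += eta * (np.outer(y, x) - W)             # Hebbian growth plus decay
```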

To model drift, researchers then added ‘noise,’ or random variability, to the synaptic plasticity between modeled neurons. This noise was inspired by experimental observations in biology showing that a synapse may fail to transmit the firing of its neuron to a downstream neuron. Over time, the model showed that this noise caused the neural representations to drift between different solutions.
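Continuing the toy sketch, injecting random variability into each plasticity step makes the learned weights wander over time. The Gaussian form and scale of the noise are assumptions for illustration, not the paper’s exact noise model:

```python
import numpy as np

# Noisy Hebbian plasticity: the same toy update as above, plus random
# variability in each step, loosely mimicking unreliable synapses.
rng = np.random.default_rng(1)
n_in, n_out, eta, sigma = 20, 5, 0.01, 0.05
W = rng.normal(scale=0.1, size=(n_out, n_in))

snapshots = []
for step in range(5001):
    x = rng.normal(size=n_in)
    y = W @ x
    noise = sigma * rng.normal(size=W.shape)    # plasticity noise
    W += eta * (np.outer(y, x) - W + noise)
    if step % 1000 == 0:
        snapshots.append(W.copy())

# Distance from the starting weights grows with time: the solution
# diffuses, a toy analog of representational drift.
print([round(float(np.linalg.norm(s - snapshots[0])), 2) for s in snapshots])
```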

Importantly, the researchers’ model exhibits representational drift while maintaining the similarity of neural activity patterns, showing how our memories can endure over long periods despite the drift. The results also matched experimental observations in mice, in which the populations of neurons involved in a learned task changed over several days.
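A simple way to see why equivalent representations exist at all: any rotation of the population activity changes what each individual neuron does while leaving every pairwise similarity intact. A minimal numpy demonstration of this point (illustrative, not the paper’s analysis):

```python
import numpy as np

# Rotating population activity changes each neuron's tuning but
# preserves all pairwise similarities between stimuli.
rng = np.random.default_rng(2)
n_neurons, n_stimuli = 5, 100
Y = rng.normal(size=(n_neurons, n_stimuli))        # responses to stimuli

# A random orthogonal "rotation" in neuron space, via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(n_neurons, n_neurons)))
Y_drifted = Q @ Y                                  # every neuron's tuning changes...

# ...yet the stimulus-by-stimulus similarity matrix is unchanged:
print(np.allclose(Y.T @ Y, Y_drifted.T @ Y_drifted))   # True
```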

“Representational drift can be accounted for by a noisy learning process and the existence of equivalent representations,” says Cengiz Pehlevan, a senior author on the new paper and an assistant professor of applied mathematics at Harvard. “If you put these things together, you get representational drift.”


A piece of the puzzle

While the model is a promising explanation, it has yet to be confirmed in a biological system. It does, however, make several testable predictions. One is that neurons whose synapses turn over faster should drift more rapidly. This could be tested by measuring the lifetimes of synapses alongside the long-term behavior of neurons that have learned a task, as the sketch below illustrates in miniature.
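In the toy model from earlier, this prediction is easy to illustrate: doubling the plasticity noise, a crude stand-in for faster synaptic turnover, makes the weights drift farther in the same amount of time. A hypothetical sketch, not an experimental protocol:

```python
import numpy as np

def drift_after(sigma, steps=5000, seed=0):
    """Distance the toy Hebbian weights wander for a given noise level.
    All parameters are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n_in, n_out, eta = 20, 5, 0.01
    W = rng.normal(scale=0.1, size=(n_out, n_in))
    W0 = W.copy()
    for _ in range(steps):
        x = rng.normal(size=n_in)
        y = W @ x
        W += eta * (np.outer(y, x) - W + sigma * rng.normal(size=W.shape))
    return np.linalg.norm(W - W0)

# Noisier plasticity (faster synaptic turnover) drifts farther:
print(drift_after(sigma=0.05), drift_after(sigma=0.10))
```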

While these new findings describe how representational drift occurs, why it happens is still a wide-open question. It could be fundamental to the way our brain works, or it could be an unintended consequence of noise. Perhaps it is the only way to preserve memories as old neurons die and are replaced.

“The new model is an important piece of the puzzle,” Chklovskii says. “But we’re still a long way from understanding how the brain works.”

Looking into the why is Chklovskii’s next goal. What he and his collaborators find could help answer some of the most fundamental questions about the brain. It might also help researchers develop better deep learning networks, which are loosely modeled on how brains learn. By increasing the understanding of our own brains, we might also be able to create better artificial intelligence.

“More advances like this one could lead us to an algorithm that would emulate the brain,” Chklovskii says. “Drift would be one valuable part of a larger algorithm.”
