Biological Brains Inspire a New Building Block for Artificial Neural Networks

While artificial intelligence systems have advanced tremendously in recent years, they still lag behind the performance of real brains in reliability and efficiency. A new type of computational unit developed at the Flatiron Institute could help close that gap.

New research is exploring how to improve neural networks using components more like those in real brains. Alex Eben Meyer for Simons Foundation

While artificial neural networks are revolutionizing technology and besting humans in tasks ranging from chess to protein folding, they still fall short of their biological counterparts in many key areas, particularly reliability and efficiency.

The solution to these shortcomings could be for AI to act more like a real brain. Computational neuroscientists at the Simons Foundation’s Flatiron Institute in New York City have drawn lessons from neurobiology to enhance artificial systems using a new type of computational component that is more akin to those found in real brains. The researchers presented their work at the annual conference of the Association for the Advancement of Artificial Intelligence (AAAI) in Singapore on January 23.

“Artificial intelligence systems like ChatGPT — amazing as they are — are, in several respects, inferior to the human brain,” says Dmitri “Mitya” Chklovskii, a group leader in the Center for Computational Neuroscience (CCN) at the Flatiron Institute. “They’re very energy- and data-hungry. They hallucinate, and they can’t do simple things that we take for granted, like reasoning or planning,” he says. Each of these individual issues may trace back to one larger problem, he says: The foundations of these systems differ significantly from “the foundations on which the brain is built.”

The current building blocks of artificial neural networks are deeply rooted in a previous era. During that time, “the people who wanted to understand how the brain works and the people who wanted to build artificial brains or artificial intelligence were either the same people or close colleagues and collaborators,” Chklovskii says. “Then, sometime in the ’60s and ’70s, those two fields divorced and basically became fields of their own,” he says. That divergence has also led to artificial networks that are based on an outdated understanding of how biological brains function.

In the new work, Chklovskii and his colleagues revisit the fundamentals of artificial neural network architecture. For more than a decade, Chklovskii had sought an alternative to the decades-old building blocks used in machine learning. Drawing on years of research and on lessons from real animal brains, he and his team arrived at a solution rooted in our modern understanding of the brain.

He and his team built a biologically inspired multilayer neural network made up of a new type of fundamental computational unit called rectified spectral units, or ReSUs. These ReSUs extract the features of the recent past that are most predictive of the near future. The ReSUs are self-supervised, meaning they learn how to process data from the data stream itself, rather than relying on externally supplied labels or instructions. ReSUs are designed to learn from constantly changing data, just as our brains learn from the real world.
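To make the "past predicts future" idea concrete, here is a minimal, hypothetical sketch in Python. It uses plain least-squares prediction with a rectified output; the paper's actual ReSU algorithm is not reproduced here, and the signal, window size and fitting method are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the general idea behind a self-supervised,
# predictive unit: weight the recent past so it best predicts the next
# input. This is NOT the paper's ReSU algorithm; it is plain
# least-squares prediction with a rectified output, shown only to make
# "past predicts future" and "self-supervised" concrete.

rng = np.random.default_rng(0)
T, window = 2000, 5
x = np.cumsum(rng.standard_normal(T))  # a slowly drifting input stream

# The stream supplies its own targets: predict x[t] from the preceding
# `window` samples. No external labels are involved.
past = np.stack([x[i:i + window] for i in range(T - window)])
future = x[window:]

w, *_ = np.linalg.lstsq(past, future, rcond=None)  # fit predictive weights
response = np.maximum(past @ w, 0.0)               # rectified unit output

print("mean squared prediction error:", np.mean((past @ w - future) ** 2))
```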

This is in stark contrast to the current standard units, called rectified linear units (ReLUs). ReLUs have roots in a 1943 paper in which researchers presented "a very simple, but very primitive, model of a neuron," Chklovskii says; they were popularized about 15 years ago.
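The ReLU itself is simple to state: it passes positive inputs through unchanged and outputs zero for everything else.

```python
import numpy as np

def relu(x):
    """Rectified linear unit: pass positive inputs through, zero out the rest."""
    return np.maximum(x, 0.0)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # -> [0.  0.  0.  1.5]
```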

Building on that earlier model, researchers developed ReLU-based networks, which are commonly trained using a concept known as error backpropagation. This method calculates each individual neuron's contribution to the network's past mistakes, enabling the network to adjust and perform more accurately in the future. "But standard error backpropagation, as used in deep learning, is widely viewed as biologically implausible, and there is no evidence that the brain implements it in that form," Chklovskii says.
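For readers who want to see what that blame assignment looks like, here is a toy numpy sketch of backpropagation for a single ReLU unit. It is an illustration only, with made-up inputs and targets, not how production deep-learning systems are written.

```python
import numpy as np

# Toy sketch of backpropagation for a single ReLU unit: compute the
# error, let the ReLU's gate decide which inputs were "responsible,"
# and nudge each weight against its share of the blame.
rng = np.random.default_rng(1)
X = rng.standard_normal((64, 3))                     # toy inputs
y = np.maximum(X @ np.array([1.0, -2.0, 0.5]), 0.0)  # toy targets

W = 0.1 * rng.standard_normal(3)
for step in range(500):
    pre = X @ W
    out = np.maximum(pre, 0.0)               # forward pass through the ReLU
    err = out - y                            # the network's mistakes
    grad = X.T @ (err * (pre > 0)) / len(X)  # backward pass: credit assignment
    W -= 0.1 * grad                          # gradient-descent correction

print("final loss:", np.mean((np.maximum(X @ W, 0.0) - y) ** 2))
```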

Unlike the ReLUs, the novel ReSUs “actually care about the history of the input” they receive, says Shanshan Qin, a former CCN research scientist who is now an assistant professor of computational neuroscience and biophysics at Shanghai Jiao Tong University in China and lead author of the article that accompanied the AAAI presentation. That alternative setup, which doesn’t involve backpropagation, means ReSU networks are far closer analogs of what actually happens in the brain, he says.

The team's ReSU neural network succeeded in a proof-of-principle test. The researchers created videos composed of photographic images that drift in different directions, which were then used to train the network. "Imagine you are sitting on a train looking out the window. The trees, mountains, and houses outside appear to 'slide' horizontally across your vision. That sliding movement is a 'translation,'" Qin says.
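A translating stimulus of this kind is straightforward to sketch. The toy example below slides an image sideways frame by frame; it uses a random texture as a stand-in for the photographic scenes used in the study.

```python
import numpy as np

# Illustrative stand-in for the training stimuli: a still image that
# slides sideways frame by frame, like scenery seen from a train window.
# (A random texture here; the study used photographic images.)
rng = np.random.default_rng(2)
image = rng.random((32, 32))

n_frames, speed = 20, 1  # pixels of horizontal shift per frame
video = np.stack([np.roll(image, shift=t * speed, axis=1)
                  for t in range(n_frames)])

print(video.shape)  # -> (20, 32, 32): frames, height, width
```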

They demonstrated that a network trained on these videos learned two key features that resemble components of the fruit fly (Drosophila) visual system. The first feature is temporal filters, which sift through the input history that real or artificial neurons receive. These filters select certain signals to emphasize and others to ignore based on when the signals were received and other patterns that emerge within the system. Motion-selective units are the second key feature. These units fire only when movement occurs in a certain direction.

Rather than directly instructing the system through coded rules, "we gave the network a blank slate," Qin says. "We showed it the 'train window' videos (translating scenes). The network realized on its own: 'To make sense of this data, I must remember what happened a split-second ago (temporal filters), and compare neighbor to neighbor (motion selection),'" he says.
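Those two ingredients, a temporal filter plus a neighbor-to-neighbor comparison, echo the Hassenstein-Reichardt motion detector, a classic model from fly vision. The sketch below shows that classic circuit for intuition; the study's learned units are not claimed to be literally this model.

```python
import numpy as np

def lowpass(signal, alpha=0.3):
    """Exponential temporal filter: a smoothed memory of the recent past."""
    out = np.zeros_like(signal)
    for t in range(1, len(signal)):
        out[t] = (1 - alpha) * out[t - 1] + alpha * signal[t]
    return out

def motion_detector(left, right):
    """Correlate each pixel's delayed signal with its neighbor's current one.
    Positive output signals rightward motion; negative signals leftward."""
    return lowpass(left) * right - lowpass(right) * left

# A bright edge passes the left pixel at t=20, then the right pixel at
# t=24: rightward motion, so the detector's output should be positive.
t = np.arange(50.0)
left = np.exp(-((t - 20) ** 2) / 8)
right = np.exp(-((t - 24) ** 2) / 8)
print("mean detector output:", motion_detector(left, right).mean())
```

Reversing the order in which the two pixels see the edge flips the sign of the output, which is the kind of direction selectivity described above.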

If the approach can be successfully scaled up, networks built this way could tackle more complex computational tasks using local rules similar to those that govern how neighboring neurons learn together. The approach may also excel in settings where the program lacks supervision and works from raw data that hasn't been labeled or given additional context, Qin says.

The work not only brings AI closer to biology, but it also helps explain how biological systems operate, Qin says. “We can explain a lot of existing experimental data in fruit fly visual systems using this architecture,” he adds.

In the future, Chklovskii, Qin and colleagues hope to build on this work by developing ReSU-based neural networks modeled on different sensory systems, such as those responsible for smell and hearing, in animals ranging from fruit flies to humans. Such work would help reveal how those systems operate in nature and could suggest new ways of designing neural networks, Qin says.
