Building a Network That Learns Like We Do

At each instant, our brains reduce the fire hose of sensory input to simple realities. An artificial neural network suggests the brain relies on a particular simplification strategy.

At each instant, our senses gather oodles of information, yet somehow our brains reduce that fire hose of input to simple realities: A car is honking. A bluebird is flying.

How does this happen?

One part of simplifying visual information is ‘dimensionality reduction.’ The brain, for instance, takes in an image made up of thousands of pixels and labels it ‘teapot.’ One particular dimensionality-reduction strategy shows up repeatedly in the brain, and recent work from a team led by Dmitri Chklovskii, group leader for neuroscience at the Center for Computational Biology, suggests its prevalence may be no accident.
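To give a concrete feel for dimensionality reduction, here is a minimal sketch using classic principal component analysis (PCA). This is a textbook method chosen for illustration, not the algorithm studied by Chklovskii's team; the data and variable names are made up. It compresses toy 64-pixel "images" that secretly vary along only two directions down to two numbers each:

```python
import numpy as np

# Toy data: 200 "images" of 64 pixels each, generated from just
# 2 hidden degrees of freedom plus a little noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))           # 2 hidden factors
mixing = rng.normal(size=(2, 64))            # how they spread over 64 pixels
images = latent @ mixing + 0.01 * rng.normal(size=(200, 64))

# PCA: project onto the top eigenvectors of the data covariance.
centered = images - images.mean(axis=0)
cov = centered.T @ centered / len(centered)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
top2 = eigvecs[:, -2:]                       # the 2 principal directions
reduced = centered @ top2                    # 64 pixels -> 2 numbers per image

# Because the data truly had only 2 degrees of freedom, the top
# 2 components capture almost all of the variance.
explained = eigvals[-2:].sum() / eigvals.sum()
print(f"{explained:.1%} of variance kept in 2 of 64 dimensions")
```

The point of the toy example is that high-dimensional input can often be summarized by far fewer numbers with little loss, which is the essence of what the article means by simplification.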

Consider color. In the brain, one neuron may fire when a person looks at a green teapot, whereas another fires at a blue teapot. Neuroscientists say that these cells have localized receptive fields: each neuron responds strongly to one hue, and together the population spans the entire rainbow. A similar setup allows us to distinguish aural pitches.
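The localized-receptive-field idea can be sketched with a toy population of model neurons, each tuned to its own preferred hue via a Gaussian tuning curve. This is an illustrative caricature, not a model of real cortical circuitry; the neuron count, tuning width, and hue values are all assumptions made up for the example:

```python
import numpy as np

# 12 model neurons whose preferred hues tile the color wheel,
# with hue represented as a number in [0, 1).
n_neurons = 12
preferred_hues = np.linspace(0.0, 1.0, n_neurons, endpoint=False)
width = 0.08  # tuning width: how narrowly each neuron is tuned

def population_response(hue):
    """Firing of every neuron to a single hue. Distance is circular,
    so hues near 0.0 and 1.0 wrap around like a color wheel."""
    d = np.abs(preferred_hues - hue)
    d = np.minimum(d, 1.0 - d)              # wrap-around distance
    return np.exp(-0.5 * (d / width) ** 2)  # Gaussian tuning curve

# Green and blue hues excite different neurons most strongly,
# yet every hue in between excites some neuron: the population
# collectively spans the whole rainbow.
green, blue = 0.33, 0.66
print("neuron most excited by green:", population_response(green).argmax())
print("neuron most excited by blue: ", population_response(blue).argmax())
```

Each stimulus lights up a small, localized patch of the population, and which neurons fire tells you the hue, mirroring the green-teapot versus blue-teapot example in the text.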
