59 Publications

The surprising secret identity of the semidefinite relaxation of K-means: manifold learning

M. Tepper, A.M. Sengupta, D. Chklovskii

In recent years, semidefinite programs (SDPs) have been the subject of interesting research in the field of clustering. In many cases, these convex programs deliver the same answers as non-convex alternatives and come with a guarantee of optimality. Unexpectedly, we find that a popular semidefinite relaxation of K-means (SDP-KM) learns manifolds present in the data, something not possible with the original K-means formulation. To build an intuitive understanding of its manifold learning capabilities, we develop a theoretical analysis of SDP-KM on idealized datasets. Additionally, we show that SDP-KM even segregates linearly non-separable manifolds. SDP-KM is convex, and its globally optimal solution can be found by generic SDP solvers in polynomial time. To overcome the poor performance of these solvers on large datasets, we explore efficient algorithms based on the explicit Gramian representation of the problem. These features render SDP-KM a versatile and interesting tool for manifold learning while remaining amenable to theoretical analysis.
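As an illustrative aside (not code from the paper): on idealized, well-separated clusters, the optimum of the K-means SDP relaxation is known to take the form of a normalized cluster co-membership matrix X = B Bᵀ, where B is the cluster-indicator matrix with columns rescaled to unit norm. The minimal sketch below (function name is ours) constructs that solution for given labels and exposes the SDP-KM constraints it satisfies: unit row sums, trace equal to the number of clusters k, nonnegativity, and positive semidefiniteness (automatic for a Gram matrix).

```python
import math

def sdp_km_ideal_solution(labels, k):
    """X = B B^T for given cluster labels: the known SDP-KM optimum on
    idealized, well-separated data (B: indicator matrix, unit-norm columns)."""
    n = len(labels)
    sizes = [labels.count(c) for c in range(k)]
    B = [[1.0 / math.sqrt(sizes[c]) if labels[i] == c else 0.0
          for c in range(k)] for i in range(n)]
    # X is a Gram matrix, hence positive semidefinite by construction
    return [[sum(B[i][c] * B[j][c] for c in range(k)) for j in range(n)]
            for i in range(n)]

labels = [0, 0, 0, 1, 1]                 # two clusters, sizes 3 and 2
X = sdp_km_ideal_solution(labels, 2)
trace = sum(X[i][i] for i in range(len(labels)))   # equals k = 2
row_sums = [sum(row) for row in X]                 # each equals 1
```

Entries linking points from different clusters are exactly zero, which is the co-membership structure a generic SDP solver would have to recover.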

June 19, 2017

Resource-efficient perceptron has sparse synaptic weight distribution

C. Pehlevan, A. Sengupta

Resource efficiency is important for the biological function of neurons. Using the perceptron as a model of a neuron, we show that resource-efficient learning implies sparse neural connectivity. The perceptron associates inputs to outputs by adjusting its synaptic weights. We propose that the learned synaptic weights are the most resource-efficient ones, minimizing a biological resource cost given by the total absolute synaptic weight (the l1-norm). Analytical methods from statistical physics and numerical simulations demonstrate that a resource-efficient perceptron has sparse connectivity. Sparseness decreases, and resource usage increases, with the number of associations to be learned. Our results have implications for synaptic connectivity in the cerebellum, where supervised learning is believed to occur.
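A hedged sketch of the underlying mechanism (our construction, not the paper's analysis): training a perceptron with a hinge loss plus an l1 penalty on the weights, optimized by proximal subgradient descent. The soft-thresholding proximal step drives weights on uninformative inputs to exactly zero, illustrating why l1-type resource costs yield sparse connectivity. Only the first two of ten inputs carry label information here.

```python
import random

random.seed(0)

def soft_threshold(v, t):
    """Proximal step for the l1 penalty: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

d, n = 10, 40
X = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n)]
y = [1 if x[0] + x[1] > 0 else -1 for x in X]   # only 2 of 10 inputs matter

w = [0.0] * d
eta, lam = 0.05, 0.1
for _ in range(500):
    grad = [0.0] * d
    for xi, yi in zip(X, y):
        if yi * sum(wj * xj for wj, xj in zip(w, xi)) < 1:   # margin violated
            for j in range(d):
                grad[j] -= yi * xi[j]
    # hinge-loss subgradient step, then the l1 proximal (soft-threshold) step
    w = [soft_threshold(wj - eta * gj / n, eta * lam) for wj, gj in zip(w, grad)]

zeros = sum(1 for wj in w if wj == 0.0)
acc = sum(1 for xi, yi in zip(X, y)
          if (sum(wj * xj for wj, xj in zip(w, xi)) > 0) == (yi > 0)) / n
```

Most of the eight uninformative weights end at exactly zero while the two informative ones remain large, mirroring the sparse weight distributions the paper derives analytically.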


The comprehensive connectome of a neural substrate for ‘ON’ motion detection in Drosophila

S. Takemura, A. Nern, D. Chklovskii, L.K. Scheffer, G. Rubin, I.A. Meinertzhagen

Analysis of computations in neural circuits often relies on simplified models because the actual neuronal implementation is not known. For example, a problem in vision, how the eye detects image motion, has long been analysed using the Hassenstein-Reichardt (HR) detector and Barlow-Levick (BL) models. Both simulate motion detection well, but the exact neuronal circuits undertaking these tasks remain elusive. We reconstructed a comprehensive connectome of the circuits of Drosophila's motion-sensing T4 cells using a novel EM technique. We uncover complex T4 inputs and reveal that putative excitatory inputs cluster at T4's dendrite shafts, while inhibitory inputs localize to the bases. Consistent with our previous study, we reveal that Mi1 and Tm3 cells provide most synaptic contacts onto T4. We are, however, unable to reproduce the spatial offset between these cells reported previously. Our comprehensive connectome reveals complex circuits that include candidate anatomical substrates for both HR and BL types of motion detectors.
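For readers unfamiliar with the HR model mentioned above, here is a minimal sketch of the classic Hassenstein-Reichardt correlator (illustrative only, not the paper's circuit reconstruction): the signal from one photoreceptor is delayed and multiplied with its neighbor's signal, and the mirror-symmetric product is subtracted, yielding a direction-selective response.

```python
def hr_detector(a, b, delay):
    """Summed Hassenstein-Reichardt response to photoreceptor traces a, b."""
    out = 0.0
    for t in range(delay, len(a)):
        # delay-and-correlate branch, minus the mirror-symmetric branch
        out += a[t - delay] * b[t] - a[t] * b[t - delay]
    return out

def pulse_train(t0, length, width=2):
    """A brief light pulse arriving at time t0."""
    return [1.0 if t0 <= t < t0 + width else 0.0 for t in range(length)]

T, delay = 30, 3
a = pulse_train(5, T)   # stimulus reaches photoreceptor A at t = 5
b = pulse_train(8, T)   # ...and neighboring photoreceptor B at t = 8
preferred = hr_detector(a, b, delay)   # motion in the preferred direction
null = hr_detector(b, a, delay)        # the same stimulus moving the other way
```

When the delayed copy of A coincides with B's signal (preferred-direction motion) the output is positive; motion in the opposite direction gives an equal and opposite response.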

April 22, 2017


Why do similarity matching objectives lead to Hebbian/anti-Hebbian networks?

Modeling the self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience. Yet, derivations of single-layer networks with such local learning rules from principled optimization objectives became possible only recently, with the introduction of similarity matching objectives. What explains the success of similarity matching objectives in deriving neural networks with local learning rules? Here, using dimensionality reduction as an example, we introduce several variable substitutions that illuminate the success of similarity matching. We show that the full network objective may be optimized separately for each synapse using local learning rules, in both the offline and online settings. We formalize the long-standing intuition about the rivalry between Hebbian and anti-Hebbian rules by formulating a min-max optimization problem. We introduce a novel dimensionality reduction objective using fractional matrix exponents. To illustrate the generality of our approach, we apply it to a novel formulation of dimensionality reduction combined with whitening. We confirm numerically that networks with learning rules derived from principled objectives perform better than those with heuristic learning rules.
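To make "local learning rules" concrete, here is a heavily simplified single-output special case of an online similarity-matching network (our sketch under stated assumptions, not the paper's exact algorithm; the paper's networks have a full matrix of lateral weights, which reduces here to a scalar m). Each update uses only the pre- and postsynaptic activity available at that synapse: the Hebbian feedforward update w ← w + η(yx − w) and the anti-Hebbian-flavored normalizer update m ← m + η(y² − m). At the fixed point, w aligns with the top principal component of the input stream.

```python
import math
import random

random.seed(1)

w = [0.1, 0.1, 0.1]   # feedforward (Hebbian) synaptic weights
m = 1.0               # scalar lateral/normalizing variable (anti-Hebbian role)
eta = 0.01

for _ in range(6000):
    # stream inputs whose variance is dominated by the first coordinate
    x = [random.gauss(0, 2.0), random.gauss(0, 0.3), random.gauss(0, 0.3)]
    y = sum(wj * xj for wj, xj in zip(w, x)) / m   # output of the single neuron
    # local updates: each synapse sees only its own pre/post activity
    w = [wj + eta * (y * xj - wj) for wj, xj in zip(w, x)]
    m = m + eta * (y * y - m)

# alignment of the learned weight vector with the high-variance direction
align = abs(w[0]) / math.sqrt(sum(wj * wj for wj in w))
```

At stationarity w = ⟨yx⟩ and m = ⟨y²⟩, so Cw = mw with C the input covariance: w is an eigenvector of C, and the top one is the stable fixed point.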

March 23, 2017

Cerebellar granule cells acquire a widespread predictive feedback signal during motor learning

Cerebellar granule cells, which constitute half the brain's neurons, supply Purkinje cells with contextual information necessary for motor learning, but how they encode this information is unknown. Here we show, using two-photon microscopy to track neural activity over multiple days of cerebellum-dependent eyeblink conditioning in mice, that granule cell populations acquire a dense representation of the anticipatory eyelid movement. Initially, granule cells responded to neutral visual and somatosensory stimuli as well as periorbital airpuffs used for training. As learning progressed, two-thirds of monitored granule cells acquired a conditional response whose timing matched or preceded the learned eyelid movements. Granule cell activity covaried trial by trial to form a redundant code. Many granule cells were also active during movements of nearby body structures. Thus, a predictive signal about the upcoming movement is widely available at the input stage of the cerebellar cortex, as required by forward models of cerebellar control.

March 20, 2017

Integrative neuromechanics of crawling in D. melanogaster larvae

Locomotion in an organism is a consequence of the coupled interaction between brain, body and environment. Motivated by qualitative observations and quantitative perturbations of crawling in Drosophila melanogaster larvae, we construct a minimal integrative mathematical model for its locomotion. Our model couples the excitation-inhibition circuits in the nervous system to force production in the muscles and body movement in a frictional environment, thence linking neural dynamics to body mechanics via sensory feedback in a heterogeneous environment. Our results explain the basic observed phenomenology of crawling with and without proprioception, and elucidate the stabilizing role that proprioception plays in producing a robust crawling phenotype in the presence of biological perturbations. More generally, our approach allows us to make testable predictions on the effect of changing body-environment interactions on crawling, and serves as a step in the development of hierarchical models linking cellular processes to behavior.

July 25, 2016

Neocortex: a lean mean memory storage machine

D. Chklovskii, B. Mizusaki, A. Stepanyants, P. Sjöström

Connectivity patterns of neocortex exhibit several odd properties: for example, most neighboring excitatory neurons do not connect, which seems curiously wasteful. Brunel’s elegant theoretical treatment reveals how optimal information storage can naturally impose these peculiar properties.


April 26, 2016

Derivation of Neural Circuits from the Similarity Matching Principle

Our brains analyze high-dimensional datasets streamed by our sensory organs in multiple stages. Sensory cortices, for example, perform tasks like dimensionality reduction, sparse feature discovery and clustering. To model these tasks we pursue an approach analogous to use of action principles in physics and propose a new family of objective functions based on the principle of similarity matching. From these objective functions we derive online distributed algorithms that can be implemented by biological neural networks resembling cortical circuits. Our networks can adapt to changes in the number of latent dimensions or the number of clusters in the input dataset. Furthermore, we formulate minimax optimization problems from which we derive online algorithms with two classes of neurons identified with principal neurons and interneurons in biological circuits. In addition to bearing resemblance to biological circuits, our algorithms are competitive for Big Data applications.
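The similarity matching principle itself can be stated compactly: find a low-dimensional representation Y whose pairwise sample similarities YᵀY match those of the input, XᵀX, minimizing ‖XᵀX − YᵀY‖²_F. A toy illustration (our code, using only the offline objective, not the online networks derived in the paper): of two candidate 1-D representations of a small dataset, the one along the high-variance direction incurs a much lower similarity mismatch, consistent with the known equivalence to principal component projection.

```python
def gram(rows):
    """Sample-by-sample Gramian: inner products between all pairs of samples."""
    n = len(rows)
    return [[sum(p * q for p, q in zip(rows[i], rows[j])) for j in range(n)]
            for i in range(n)]

def sim_match_objective(inputs, outputs):
    """Similarity matching cost: squared Frobenius mismatch of the Gramians."""
    GX, GY = gram(inputs), gram(outputs)
    n = len(inputs)
    return sum((GX[i][j] - GY[i][j]) ** 2 for i in range(n) for j in range(n))

# toy data: high variance along the first coordinate, little along the second
data = [[2.0, 0.1], [-2.0, 0.1], [1.0, -0.1], [-1.0, -0.1]]
good = [[x[0]] for x in data]   # 1-D representation along the major direction
bad = [[x[1]] for x in data]    # 1-D representation along the minor direction
```

Because the objective depends on the data only through inner products between samples, it can be optimized by online algorithms in which each synaptic update is local, which is the paper's point of departure.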
