# Publications

70 Publications

### Opposing effects of selectivity and invariance in peripheral vision

Corey M. Ziemba, E. P. Simoncelli

Sensory processing necessitates discarding some information in service of preserving and reformatting more behaviorally relevant information. Sensory neurons seem to achieve this by responding selectively to particular combinations of features in their inputs, while averaging over or ignoring irrelevant combinations. Here, we expose the perceptual implications of this tradeoff between selectivity and invariance, using stimuli and tasks that explicitly reveal their opposing effects on discrimination performance. We generate texture stimuli with statistics derived from natural photographs, and ask observers to perform two different tasks: discrimination between images drawn from families with different statistics, and discrimination between image samples with identical statistics. For both tasks, the performance of an ideal observer improves with stimulus size. In contrast, humans become better at family discrimination but worse at sample discrimination. We demonstrate through simulations that these behaviors arise naturally in an observer model that relies on a common set of physiologically plausible local statistical measurements for both tasks.


### Pinpointing the neural signatures of single-exposure visual recognition memory

E. P. Simoncelli, V. Mehrpour, T. Meyer, N. C. Rust

Memories of the images that we have seen are thought to be reflected in the reduction of neural responses in high-level visual areas such as inferotemporal (IT) cortex, a phenomenon known as repetition suppression (RS). We challenged this hypothesis with a task that required rhesus monkeys to report whether images were novel or repeated while ignoring variations in contrast, a stimulus attribute that is also known to modulate the overall IT response. The monkeys’ behavior was largely contrast invariant, contrary to the predictions of an RS-inspired decoder, which could not distinguish responses to images that are repeated from those that are of lower contrast. However, the monkeys’ behavioral patterns were well predicted by a linearly decodable variant in which the total spike count was corrected for contrast modulation. These results suggest that the IT neural activity pattern that best aligns with single-exposure visual recognition memory behavior is not RS but rather sensory referenced suppression: reductions in IT population response magnitude, corrected for sensory modulation.



### Automated and scalable analysis pipelines for voltage imaging datasets

J. Friedrich, E. Pnevmatikakis, C. Cai, A. Singh, M. Hossein Eybposh, K. Podgorski, A. Giovannucci

Voltage imaging enables monitoring neural activity at sub-millisecond and sub-cellular scale, unlocking the study of subthreshold activity, synchrony, and network dynamics with unprecedented spatio-temporal resolution. However, high data rates (>800MB/s) and low signal-to-noise ratios create bottlenecks for analyzing such datasets. Here we present VolPy, an automated and scalable pipeline to pre-process voltage imaging datasets. VolPy features motion correction, memory mapping, automated segmentation, denoising and spike extraction, all built on a highly parallelizable, modular, and extensible framework optimized for memory and speed. To aid automated segmentation, we introduce a corpus of 24 manually annotated datasets from different preparations, brain areas and voltage indicators. We benchmark VolPy against ground truth segmentation, simulations and electrophysiology recordings, and we compare its performance with existing algorithms in detecting spikes. Our results indicate that VolPy’s performance in spike extraction and scalability are state-of-the-art.

2021

### A biologically plausible neural network for multi-channel Canonical Correlation Analysis

Cortical pyramidal neurons receive inputs from multiple distinct neural populations and integrate these inputs in separate dendritic compartments. We explore the possibility that cortical microcircuits implement Canonical Correlation Analysis (CCA), an unsupervised learning method that projects the inputs onto a common subspace so as to maximize the correlations between the projections. To this end, we seek a multi-channel CCA algorithm that can be implemented in a biologically plausible neural network. For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local. Starting from a novel CCA objective function, we derive an online optimization algorithm whose optimization steps can be implemented in a single-layer neural network with multi-compartmental neurons and local non-Hebbian learning rules. We also derive an extension of our online CCA algorithm with adaptive output rank and output whitening. Interestingly, the extension maps onto a neural network whose neural architecture and synaptic updates resemble neural circuitry and synaptic plasticity observed experimentally in cortical pyramidal neurons.
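As a point of reference for the abstract above, classical batch CCA has a compact solution via whitening followed by an SVD. The sketch below (NumPy; the function name and interface are illustrative) computes canonical correlations in that batch setting. It is not the paper's online, biologically plausible algorithm, which is designed precisely to avoid this kind of global matrix computation:

```python
import numpy as np

def cca(X, Y, tol=1e-10):
    """Canonical correlations between two data matrices.
    X: (n, p), Y: (n, q). Returns correlations in descending order."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)

    def whiten(Z):
        # Orthonormal basis for the column space of the centered data.
        U, s, _ = np.linalg.svd(Z, full_matrices=False)
        return U[:, s > tol]

    Qx, Qy = whiten(X), whiten(Y)
    # Singular values of the whitened cross-product are the canonical
    # correlations (cosines of principal angles between the column spaces).
    return np.clip(np.linalg.svd(Qx.T @ Qy, compute_uv=False), 0.0, 1.0)
```

When the two views share a single latent source, one canonical correlation is near 1 and the rest stay near 0.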

2021

### Sensitivity analysis for the stationary distribution of reflected Brownian motion in a convex polyhedral cone

Reflected Brownian motion (RBM) in a convex polyhedral cone arises in a variety of applications ranging from the theory of stochastic networks to mathematical finance, and under general stability conditions, it has a unique stationary distribution. In such applications, to implement a stochastic optimization algorithm or quantify robustness of a model, it is useful to characterize the dependence of stationary performance measures on model parameters. In this paper, we characterize parametric sensitivities of the stationary distribution of an RBM in a simple convex polyhedral cone, that is, sensitivities to perturbations of the parameters that define the RBM—namely the covariance matrix, drift vector, and directions of reflection along the boundary of the polyhedral cone. In order to characterize these sensitivities, we study the long-time behavior of the joint process consisting of an RBM along with its so-called derivative process, which characterizes pathwise derivatives of RBMs on finite time intervals. We show that the joint process is positive recurrent and has a unique stationary distribution and that parametric sensitivities of the stationary distribution of an RBM can be expressed in terms of the stationary distribution of the joint process. This can be thought of as establishing an interchange of the differential operator and the limit in time. The analysis of ergodicity of the joint process is significantly more complicated than that of the RBM because of its degeneracy and the fact that the derivative process exhibits jumps that are modulated by the RBM. The proofs of our results rely on path properties of coupled RBMs and contraction properties related to the geometry of the polyhedral cone and directions of reflection along the boundary. Our results are potentially useful for developing efficient numerical algorithms for computing sensitivities of functionals of stationary RBMs.
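For a concrete handle on the object of study, the sketch below simulates an RBM in the simplest polyhedral cone, the nonnegative orthant with normal reflection, using an Euler scheme that projects each step back onto the cone (a crude approximation of the Skorokhod map, not the paper's sensitivity analysis; all names are illustrative):

```python
import numpy as np

def simulate_rbm_orthant(drift, sigma, T=1000.0, dt=0.005, seed=0):
    """Euler scheme for reflected Brownian motion in the nonnegative
    orthant with normal reflection: take a drifted Gaussian step, then
    project back onto the orthant. Returns the time average of the path
    as an estimate of the stationary mean."""
    rng = np.random.default_rng(seed)
    d = len(drift)
    n = int(T / dt)
    steps = (rng.standard_normal((n, d)) @ sigma.T) * np.sqrt(dt)
    x = np.zeros(d)
    total = np.zeros(d)
    for i in range(n):
        x = np.maximum(x + drift * dt + steps[i], 0.0)  # reflect at the boundary
        total += x
    return total / n
```

In one dimension with drift mu < 0 and variance sigma^2, the stationary law is exponential with mean sigma^2 / (2|mu|), so drift -1 with unit variance should give a time average near 0.5, up to a discretization bias that shrinks like sqrt(dt).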

February 3, 2021

### Impression learning: Online predictive coding with synaptic plasticity

E. P. Simoncelli, C. Bredenberg, B. Lyo, C. Savin

Early sensory areas in the brain are faced with a task analogous to the scientific process itself: given raw data, they must extract meaningful information about its underlying structure. This process is particularly difficult, because the true underlying structure of the data is never revealed, so representation learning must be largely unsupervised. Framing this process in the language of Bayesian probabilities is tempting but difficult to connect to biology, because we still lack a satisfactory account of how the machinery of Bayesian inference and learning is implemented in neural circuits. Here, we provide a theoretical account of how learning to infer latent structure can be implemented in neural networks using local synaptic plasticity. To do this, we derive a learning algorithm in which synaptic plasticity is driven by a local error signal, computed by comparing stimulus-driven responses to internal model predictions (the network's "impression" of the data). We associate these components with the basal and apical dendritic compartments of pyramidal neurons. Our solution builds on the Wake/Sleep algorithm (Dayan et al., 1995) by allowing learning to occur online and to capture temporal dependencies in continuous input streams. Compared to a traditional three-factor plasticity rule (Williams, 1992), it is substantially more stable and data-efficient, which allows it to be used for learning statistics of high-dimensional inputs. It is also flexible in that it is applicable to both rate-based and spiking-based neural activity, as well as different network architectures. More generally, our model provides a potential theoretical bridge from mechanistic accounts of synaptic plasticity to algorithmic descriptions of unsupervised probabilistic learning and inference.
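The core ingredient, a synaptic update gated by a local prediction error, can be illustrated in a deliberately stripped-down setting. The toy below is fully observed and linear, so it is far simpler than the paper's unsupervised latent-variable algorithm; it only shows the shape of the rule, and all names are illustrative:

```python
import numpy as np

def local_error_learning(contexts, stimuli, eta=0.05):
    """Toy error-gated plasticity: each update is the product of the
    presynaptic input and a local prediction error, i.e. the difference
    between the stimulus-driven response and the internal prediction."""
    w = 0.0
    for c, x in zip(contexts, stimuli):
        pred = w * c        # internal model prediction
        err = x - pred      # local error: driven response minus prediction
        w += eta * c * err  # error-gated, local weight update
    return w
```

If the stimulus is a noisy scaling of the context signal, the weight converges to that scaling factor using only locally available quantities.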

2021

### Online analysis of microendoscopic 1-photon calcium imaging data streams

In vivo calcium imaging through microendoscopic lenses enables imaging of neuronal populations deep within the brains of freely moving animals. Previously, a constrained matrix factorization approach (CNMF-E) has been suggested to extract single-neuronal activity from microendoscopic data. However, this approach relies on offline batch processing of the entire video data and is demanding both in terms of computing and memory requirements. These drawbacks prevent its applicability to the analysis of large datasets and closed-loop experimental settings. Here we address both issues by introducing two different online algorithms for extracting neuronal activity from streaming microendoscopic data. Our first algorithm, OnACID-E, presents an online adaptation of the CNMF-E algorithm, which dramatically reduces its memory and computation requirements. Our second algorithm proposes a convolution-based background model for microendoscopic data that enables even faster (real-time) processing. Our approach is modular and can be combined with existing online motion artifact correction and activity deconvolution methods to provide a highly scalable pipeline for microendoscopic data analysis. We apply our algorithms to four typical, previously published experimental datasets and show that they yield high-quality results comparable to the popular offline approach, while outperforming it in computing time and memory requirements. They can be used instead of CNMF-E to process pre-recorded data with boosted speed and dramatically reduced memory requirements. Further, they enable, for the first time, online analysis of live-streaming data even on a laptop.

2021

### CaImAn an open source tool for scalable calcium imaging data analysis

J. Friedrich, P. Gunn, A. Giovannucci, J. Kalfon, B. Brown, S. Koay, J. Taxidis, F. Najafi, J. Gauthier, P. Zhou, D. Chklovskii, E. Pnevmatikakis, B.S. Khakh, D.W. Tank

Advances in fluorescence microscopy enable monitoring larger brain areas in vivo with finer time resolution. The resulting data rates require reproducible analysis pipelines that are reliable, fully automated, and scalable to datasets generated over the course of months. We present CaImAn, an open-source library for calcium imaging data analysis. CaImAn provides automatic and scalable methods to address problems common to pre-processing, including motion correction, neural activity identification, and registration across different sessions of data collection. It does this while requiring minimal user intervention, with good scalability on computers ranging from laptops to high-performance computing clusters. CaImAn is suitable for two-photon and one-photon imaging, and also enables real-time analysis on streaming data. To benchmark the performance of CaImAn we collected and combined a corpus of manual annotations from multiple labelers on nine mouse two-photon datasets. We demonstrate that CaImAn achieves near-human performance in detecting locations of active neurons.
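As a rough illustration of the matrix-factorization idea underlying source extraction (this is plain multiplicative-update NMF, not CaImAn's constrained algorithm or its API; all names are illustrative), a movie can be factored into spatial footprints and temporal traces:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Multiplicative-update NMF (Lee & Seung): factor a nonnegative
    movie matrix V (pixels x frames) as A @ C, with A holding k spatial
    footprints and C the corresponding temporal activity traces."""
    rng = np.random.default_rng(seed)
    A = rng.random((V.shape[0], k)) + 0.1
    C = rng.random((k, V.shape[1])) + 0.1
    for _ in range(iters):
        C *= (A.T @ V) / (A.T @ A @ C + 1e-9)  # update traces
        A *= (V @ C.T) / (A @ C @ C.T + 1e-9)  # update footprints
    return A, C
```

On a synthetic movie built from non-overlapping footprints, this recovers a factorization whose reconstruction error is small; the constrained variants add spatial locality, sparsity, and calcium-dynamics priors on top of this basic scheme.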

2021

### Interpretable Image Clustering via Diffeomorphism-Aware K-Means

R. Cosentino, Y. Bahroun, R. Balestriero, A. Sengupta, B. Aazhang, R. Baraniuk

We design an interpretable clustering algorithm aware of the nonlinear structure of image manifolds. Our approach leverages the interpretability of K-means applied in the image space while addressing its clustering performance issues. Specifically, we develop a measure of similarity between images and centroids that encompasses a general class of deformations: diffeomorphisms, rendering the clustering invariant to them. Our work leverages the Thin-Plate Spline interpolation technique to efficiently learn diffeomorphisms best characterizing the image manifolds. Extensive numerical simulations show that our approach competes with state-of-the-art methods on various datasets.

2020