2596 Publications

Discriminating image representations with principal distortions

J. Feather, D. Lipshutz, S. Harvey, A. Williams, E. P. Simoncelli

Image representations (artificial or biological) are often compared in terms of their global geometric structure; however, representations with similar global structure can have strikingly different local geometries. Here, we propose a framework for comparing a set of image representations in terms of their local geometries. We quantify the local geometry of a representation using the Fisher information matrix, a standard statistical tool for characterizing the sensitivity to local stimulus distortions, and use this as a substrate for a metric on the local geometry in the vicinity of a base image. This metric may then be used to optimally differentiate a set of models, by finding a pair of "principal distortions" that maximize the variance of the models under this metric. As an example, we use this framework to compare a set of simple models of the early visual system, identifying a novel set of image distortions that allow immediate comparison of the models by visual inspection. In a second example, we apply our method to a set of deep neural network models and reveal differences in the local geometry that arise due to architecture and training types. These examples demonstrate how our framework can be used to probe for informative differences in local sensitivities between complex models, and suggest how it could be used to compare model representations with human perception.
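The pairwise case of this comparison admits a compact sketch: for two models with Fisher information matrices I_A and I_B at a base image, the distortion to which model A is most sensitive relative to model B is the top generalized eigenvector of (I_A, I_B). Below is a minimal numpy/scipy illustration, assuming linear model responses with additive white Gaussian noise (so that I = JᵀJ for response Jacobian J); the two "models" are invented stand-ins, not the paper's models.

```python
import numpy as np
from scipy.linalg import eigh

def fisher_info(jacobian):
    """Fisher information of a model response r(x) under additive white
    Gaussian noise: I(x) = J(x)^T J(x), with J the Jacobian of r at x."""
    return jacobian.T @ jacobian

# Two hypothetical linear models of a 2-pixel "image": model A amplifies
# pixel 0, model B amplifies pixel 1 (illustrative values only).
J_a = np.diag([2.0, 1.0])
J_b = np.diag([1.0, 2.0])
I_a, I_b = fisher_info(J_a), fisher_info(J_b)

# The distortion that best separates the pair maximizes the ratio of model
# A's sensitivity to model B's: the top generalized eigenvector of (I_a, I_b).
evals, evecs = eigh(I_a, I_b)
distortion = evecs[:, -1]  # eigenvector with the largest sensitivity ratio
print(distortion)          # aligned with pixel 0, where A's sensitivity dominates
```

With more than two models, the paper's "principal distortions" generalize this idea by maximizing the variance of sensitivities across the whole model set rather than a single ratio.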


Foundations of visual form selectivity in macaque areas V1 and V2

T. D. Oleskiw, Justin D. Lieber, E. P. Simoncelli, J. A. Movshon

Neurons early in the primate visual cortical pathway generate responses by combining signals from other neurons: some from downstream areas, some from within the same area, and others from areas upstream. Here we develop a model that selectively combines afferents derived from a population model of V1 cells. We use this model to account for responses we recorded from both V1 and V2 neurons in awake fixating macaque monkeys to stimuli composed of a sparse collection of locally oriented features ("droplets") designed to drive subsets of V1 neurons. The first stage computes the rectified responses of a fixed population of oriented filters at different scales that cover the visual field. The second stage computes a weighted combination of these first-stage responses, followed by a final nonlinearity, with parameters optimized to fit data from physiological recordings and constrained to encourage sparsity and locality. The fitted model accounts for the responses of both V1 and V2 neurons, capturing an average of 43% of the explainable variance for V1 and 38% for V2. The models fitted to droplet recordings predict responses to classical stimuli, such as gratings of different orientations and spatial frequencies, as well as to textures of different spectral content, which are known to be especially effective in driving V2. The models are less effective, however, at capturing the selectivity of responses to textures that include naturalistic image statistics. The pattern of afferents, defined by their weights over the four dimensions of spatial position, orientation, and spatial frequency, provides a common and interpretable characterization of the origin of many neuronal response properties in the early visual cortex. Competing Interest Statement: The authors have declared no competing interest.


Discrete Lehmann representation of three-point functions

Dominik Kiese, Hugo U. R. Strand, Kun Chen, Nils Wentzell, Olivier Parcollet, J. Kaye

We present a generalization of the discrete Lehmann representation (DLR) to three-point correlation and vertex functions in imaginary time and Matsubara frequency. The representation takes the form of a linear combination of judiciously chosen exponentials in imaginary time, and products of simple poles in Matsubara frequency, which are universal for a given temperature and energy cutoff. We present a systematic algorithm to generate compact sampling grids, from which the coefficients of such an expansion can be obtained by solving a linear system. We show that the explicit form of the representation can be used to evaluate diagrammatic expressions involving infinite Matsubara sums, such as polarization functions or self-energies, with controllable, high-order accuracy. This collection of techniques establishes a framework through which methods involving three-point objects can be implemented robustly, with a substantially reduced computational cost and memory footprint.
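The structure of such an expansion is easiest to see in the simpler two-point case: a function of imaginary time is written as a linear combination of kernel columns attached to a fixed pole grid, with coefficients obtained by least squares on a sampling grid. The toy below illustrates that pattern only; the pole and sampling grids here are arbitrary, not the carefully constructed DLR grids of the paper.

```python
import numpy as np

beta = 10.0  # inverse temperature

def kernel(tau, omega):
    # Fermionic imaginary-time kernel exp(-tau*omega) / (1 + exp(-beta*omega)),
    # rearranged so both branches stay well-scaled for tau in [0, beta].
    num = np.where(omega >= 0, np.exp(-tau * omega), np.exp((beta - tau) * omega))
    return num / (1.0 + np.exp(-beta * np.abs(omega)))

# Illustrative (not optimized) pole grid and imaginary-time sampling grid.
poles = np.array([-4.0, -2.0, -1.0, -0.5, 0.5, 1.0, 2.0, 4.0])
taus = np.linspace(0.0, beta, 40)

# "Exact" two-point function generated by a single pole at omega = 1.
g_exact = kernel(taus, 1.0)

# Fit expansion coefficients by solving a linear system in the least-squares sense.
A = kernel(taus[:, None], poles[None, :])
coeffs, *_ = np.linalg.lstsq(A, g_exact, rcond=None)

# The compact expansion reproduces the function on a much finer grid.
taus_fine = np.linspace(0.0, beta, 400)
g_fit = kernel(taus_fine[:, None], poles[None, :]) @ coeffs
err = np.max(np.abs(g_fit - kernel(taus_fine, 1.0)))
```

The paper's contribution is the extension of this representation to three-point objects, where the expansion involves products of simple poles in two Matsubara frequencies rather than a single kernel column per pole.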


Contrastive-equivariant self-supervised learning improves alignment with primate visual area IT

T. Yerxa, J. Feather, E. P. Simoncelli, S. Chung

Models trained with self-supervised learning objectives have recently matched or surpassed models trained with traditional supervised object recognition in their ability to predict neural responses of object-selective neurons in the primate visual system. A self-supervised learning objective is arguably a more biologically plausible organizing principle, as the optimization does not require a large number of labeled examples. However, typical self-supervised objectives may result in network representations that are overly invariant to changes in the input. Here, we show that a representation with structured variability to input transformations is better aligned with known features of visual perception and neural computation. We introduce a novel framework for converting standard invariant SSL losses into “contrastive-equivariant” versions that encourage preservation of input transformations without supervised access to the transformation parameters. We demonstrate that our proposed method systematically increases the ability of models to predict responses in macaque inferior temporal cortex. Our results demonstrate the promise of incorporating known features of neural computation into task-optimization for building better models of visual cortex.


Learning predictable and robust neural representations by straightening image sequences

X. Niu, Cristina Savin, E. P. Simoncelli

Prediction is a fundamental capability of all living organisms, and has been proposed as an objective for learning sensory representations. Recent work demonstrates that in primate visual systems, prediction is facilitated by neural representations that follow straighter temporal trajectories than their initial photoreceptor encoding, which allows for prediction by linear extrapolation. Inspired by these experimental findings, we develop a self-supervised learning (SSL) objective that explicitly quantifies and promotes straightening. We demonstrate the power of this objective in training deep feedforward neural networks on smoothly-rendered synthetic image sequences that mimic commonly-occurring properties of natural videos. The learned model contains neural embeddings that are predictive, but also factorize the geometric, photometric, and semantic attributes of objects. The representations also prove more robust to noise and adversarial attacks compared to previous SSL methods that optimize for invariance to random augmentations. Moreover, these beneficial properties can be transferred to other training procedures by using the straightening objective as a regularizer, suggesting a broader utility of straightening as a principle for robust unsupervised learning.
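One simple way to quantify straightness is the angle between successive displacement vectors along a representation's temporal trajectory; a straightening objective can then penalize the mean turning. The sketch below is a toy stand-in for such an objective, not the paper's loss.

```python
import numpy as np

def straightness_loss(traj):
    """Mean (1 - cos(theta)) over successive displacement pairs of a
    trajectory of shape (T, D): 0 for a straight line, larger for curved
    paths. A toy stand-in for a straightening objective."""
    diffs = np.diff(traj, axis=0)                 # (T-1, D) displacements
    diffs = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    cos = np.sum(diffs[:-1] * diffs[1:], axis=1)  # cosine of each turn angle
    return np.mean(1.0 - cos)

t = np.linspace(0.0, 1.0, 20)[:, None]
straight = t * np.array([[1.0, 2.0, -1.0]])       # straight line in R^3
curved = np.stack([np.cos(3 * t[:, 0]), np.sin(3 * t[:, 0]), t[:, 0]], axis=1)

print(straightness_loss(straight))  # ~0: no turning along a line
print(straightness_loss(curved))    # > 0: a helix turns at every step
```

Minimizing such a term over network embeddings of video frames (alongside a collapse-preventing term) is the general shape of a straightening-based SSL objective.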


Shaping the distribution of neural responses with interneurons in a recurrent circuit model

D. Lipshutz, E. P. Simoncelli

Efficient coding theory posits that sensory circuits transform natural signals into neural representations that maximize information transmission subject to resource constraints. Local interneurons are thought to play an important role in these transformations, dynamically shaping patterns of local circuit activity to facilitate and direct information flow. However, the relationship between these coordinated, nonlinear, circuit-level transformations and the properties of interneurons (e.g., connectivity, activation functions, response dynamics) remains unknown. Here, we propose a normative computational model that establishes such a relationship. Our model is derived from an optimal transport objective that conceptualizes the circuit’s input-response function as transforming the inputs to achieve an efficient target response distribution. The circuit, which comprises primary neurons that are recurrently connected to a set of local interneurons, continuously optimizes this objective by dynamically adjusting both the synaptic connections between neurons as well as the interneuron activation functions. In an example application motivated by redundancy reduction, we construct a circuit that learns a dynamical nonlinear transformation that maps natural image data to a spherical Gaussian, significantly reducing statistical dependencies in neural responses. Overall, our results provide a framework in which the distribution of circuit responses is systematically and nonlinearly controlled by adjustment of interneuron connectivity and activation functions.
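A static, purely linear analogue of the redundancy-reduction target is whitening, which removes second-order dependencies by mapping responses to unit covariance. The sketch below uses ZCA whitening with invented synthetic "responses"; it illustrates only the target distribution idea, not the adaptive recurrent circuit of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated synthetic "neural responses": 5000 samples, 3 dimensions.
mix = np.array([[1.0, 0.0, 0.0], [0.8, 0.6, 0.0], [0.5, 0.5, 0.7]])
x = rng.standard_normal((5000, 3)) @ mix.T

# ZCA whitening: a fixed linear transform mapping the responses to unit
# covariance (a linear stand-in for the circuit's nonlinear transformation).
x = x - x.mean(axis=0)
cov = x.T @ x / len(x)
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T  # ZCA whitening matrix
y = x @ W.T

print(np.round(y.T @ y / len(y), 2))  # ~ identity: linear dependencies removed
```

The circuit in the paper goes further: by also adapting interneuron activation functions, it can remove higher-order dependencies and reach a spherical Gaussian target, which no fixed linear map can achieve for natural images.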


Quantifying Differences in Neural Population Activity With Shape Metrics

Joao Barbosa, A. Nejatbakhsh, L. Duong, S. Harvey, Scott L. Brincat, Markus Siegel, Earl K. Miller, A. Williams

Quantifying differences across species and individuals is fundamental to many fields of biology. However, it remains challenging to draw detailed functional comparisons between large populations of interacting neurons. Here, we introduce a general framework for comparing neural population activity in terms of shape distances. This approach defines similarity in terms of explicit geometric transformations, which can be flexibly specified to obtain different measures of population-level neural similarity. Moreover, differences between systems are defined by a distance that is symmetric and satisfies the triangle inequality, enabling downstream analyses such as clustering and nearest-neighbor regression. We demonstrate this approach on datasets spanning multiple behavioral tasks (navigation, passive viewing of images, and decision making) and species (mice and non-human primates), highlighting its potential to measure functional variability across subjects and brain regions, as well as its ability to relate neural geometry to animal behavior.
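One member of this family of shape distances is the classical Procrustes metric: the residual after the best orthogonal alignment of one response matrix to another. A minimal sketch with synthetic data (the framework also supports other transformation groups, which this toy omits):

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def shape_distance(x, y):
    """Rotation-invariant distance between two response matrices of shape
    (conditions, neurons): min over orthogonal Q of ||x - y @ Q||_F.
    A minimal sketch of one member of the shape-metric family."""
    q, _ = orthogonal_procrustes(y, x)  # best orthogonal alignment of y to x
    return np.linalg.norm(x - y @ q)

rng = np.random.default_rng(1)
x = rng.standard_normal((50, 5))
r = np.linalg.qr(rng.standard_normal((5, 5)))[0]  # random orthogonal transform
y = x @ r

print(shape_distance(x, y))  # ~0: identical shape up to rotation
```

Because this distance is symmetric and satisfies the triangle inequality, populations can be embedded in a metric space and fed to clustering or nearest-neighbor analyses, as the abstract describes.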


Disentangling Interacting Systems with Fermionic Gaussian Circuits: Application to the Single Impurity Anderson Model

Ang-Kun Wu, B. Kloss, Wladislaw Krinitsin, M. Fishman, J. Pixley, M. Stoudenmire

Tensor network quantum states are powerful tools for strongly correlated systems, tailored to capture local correlations such as in ground states with entanglement area laws. When applying tensor network states to interacting fermionic systems, a proper choice of the basis or orbitals can reduce the bond dimension of tensors and provide physically relevant orbitals. We introduce such a change of basis with unitary gates obtained from compressing fermionic Gaussian states into quantum circuits corresponding to various tensor networks. These circuits can reduce the ground-state entanglement entropy and improve the performance of algorithms such as the density matrix renormalization group. We study the Anderson impurity model with one and two impurities to show the potential of the method for improving computational efficiency and interpreting impurity physics. Furthermore, fermionic Gaussian circuits can suppress entanglement during time evolution out of a low-energy state. Finally, we consider Gaussian multiscale entanglement renormalization ansatz (GMERA) circuits which compress fermionic Gaussian states hierarchically. The emergent coarse-grained physical models from these GMERA circuits are studied in terms of their entanglement properties and suitability for performing time evolution.


In vivo measurements of receptor tyrosine kinase activity reveal feedback regulation of a developmental gradient

Emily K. Ho, Rebecca P. Kim-Yip, S. Shvartsman, et al.

A lack of tools for detecting receptor activity in vivo has limited our ability to fully explore receptor-level control of developmental patterning. Here, we extend a new class of biosensors for receptor tyrosine kinase (RTK) activity, the pYtag system, to visualize endogenous RTK activity in Drosophila. We build biosensors for three Drosophila RTKs that function across developmental stages and tissues. By characterizing Torso::pYtag during terminal patterning in the early embryo, we find that Torso activity differs from downstream ERK activity in two surprising ways: Torso activity is narrowly restricted to the poles but produces a broader gradient of ERK, and Torso activity decreases over developmental time while ERK activity is sustained. This decrease in Torso activity is driven by ERK pathway-dependent negative feedback. Our results suggest an updated model of terminal patterning where a narrow domain of Torso activity, tuned in amplitude by negative feedback, locally activates signaling effectors which diffuse through the syncytial embryo to form the ERK gradient. Altogether, this work highlights the usefulness of pYtags for investigating receptor-level regulation of developmental patterning.

January 7, 2025

Geometric model for dynamics of motor-driven centrosomal asters

Yuan-Nan Young, Vicente Gomez Herrera, Huan Zhang, R. Farhadifar, M. Shelley

The centrosomal aster is a mobile and adaptable cellular organelle that exerts and transmits forces necessary for tasks such as nuclear migration and spindle positioning. Recent experimental and theoretical studies of nematode and human cells demonstrate that pulling forces on asters by cortically anchored force generators are dominant during such processes. Here, we present a comprehensive investigation of the S-model (S for stoichiometry) of aster dynamics based solely on such forces. The model evolves the astral centrosome position, a probability field of cell-surface motor occupancy by centrosomal microtubules (under an assumption of stoichiometric binding), and free boundaries of unattached, growing microtubules. We show how cell shape affects the stability of centering of the aster, and its transition to oscillations with increasing motor number. Seeking to understand observations in single-cell nematode embryos, we use highly accurate simulations to examine the nonlinear structures of the bifurcations, and demonstrate the importance of binding domain overlap to interpreting genetic perturbation experiments. We find a generally rich dynamical landscape, dependent upon cell shape, such as internal constant-velocity equatorial orbits of asters that can be seen as traveling wave solutions. Finally, we study the interactions of multiple asters, for which we demonstrate an effective mutual repulsion due to their competition for surface force generators. We find, amazingly, that centrosomes can relax onto the vertices of Platonic and non-Platonic solids, very closely mirroring the results of the classical Thomson problem for energy-minimizing configurations of electrons constrained to a sphere and interacting via repulsive Coulomb potentials. Our findings both explain experimental observations, providing insights into the mechanisms governing spindle positioning and cell division dynamics, and show the possibility of new nonlinear phenomena in cell biology.
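The Thomson analogy referenced here can be reproduced numerically in a few lines: projected gradient descent on the Coulomb energy of N unit charges confined to a sphere. For N = 4 the charges relax onto the vertices of a regular tetrahedron (all pairwise distances sqrt(8/3) ≈ 1.633). This toy illustrates only the classical Thomson problem, not the aster model itself.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Toy Thomson problem: n = 4 repelling unit charges on the unit sphere.
n = 4
x = rng.standard_normal((n, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)

for _ in range(30000):
    diff = x[:, None, :] - x[None, :, :]              # pairwise displacements
    dist = np.linalg.norm(diff, axis=-1) + np.eye(n)  # pad diagonal: no self-force
    force = (diff / dist[..., None] ** 3).sum(axis=1) # Coulomb repulsion on each charge
    x += 0.01 * force                                 # descend the Coulomb energy
    x /= np.linalg.norm(x, axis=1, keepdims=True)     # project back onto the sphere

# Regular tetrahedron: all six pairwise distances equal sqrt(8/3) ~ 1.633.
pair = [np.linalg.norm(x[i] - x[j]) for i, j in combinations(range(n), 2)]
print(np.round(pair, 3))
```

In the paper's setting, the effective repulsion comes from competition for surface force generators rather than a Coulomb potential, which makes the emergence of Thomson-like configurations a nontrivial finding rather than a built-in consequence.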
