2573 Publications

Neurons as Detectors of Coherent Sets in Sensory Dynamics

We model sensory streams as observations from high-dimensional stochastic dynamical systems and conceptualize sensory neurons as self-supervised learners of compact representations of such dynamics. From prior experience, neurons learn coherent sets (regions of stimulus state space whose trajectories evolve cohesively over finite times) and assign membership indices to new stimuli. Coherent sets are identified via spectral clustering of the stochastic Koopman operator (SKO), where the sign pattern of a subdominant singular function partitions the state space into minimally coupled regions. For multivariate Ornstein-Uhlenbeck processes, this singular function reduces to a linear projection onto the dominant singular vector of the whitened state-transition matrix. Encoding this singular vector as a receptive field enables neurons to compute membership indices via the projection sign in a biologically plausible manner. Each neuron detects either a predictive coherent set (stimuli with common futures) or a retrospective coherent set (stimuli with common pasts), suggesting a functional dichotomy among neurons. Since neurons lack access to explicit dynamical equations, the requisite singular vectors must be estimated directly from data, for example, via past-future canonical correlation analysis on lag-vector representations, an approach that naturally extends to nonlinear dynamics. This framework provides a novel account of neuronal temporal filtering, the ubiquity of rectification in neural responses, and known functional dichotomies. Coherent-set clustering thus emerges as a fundamental computation underlying sensory processing and transferable to bio-inspired artificial systems.
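The linear special case described in the abstract can be sketched numerically. In the toy code below (an illustrative sketch, not the paper's implementation; the transition matrix is hypothetical), a discrete-time linear process x_{t+1} = A x_t + noise is whitened by its stationary covariance, and the coherent-set membership index is the sign of a projection onto a singular vector of the whitened transition matrix:

```python
import numpy as np

# Hypothetical stable state-transition matrix for a 2D linear process.
A = np.array([[0.9, 0.2],
              [0.0, 0.5]])

# Stationary covariance S solves the discrete Lyapunov equation S = A S A^T + I;
# iterate to a fixed point (converges since the spectral radius of A is < 1).
S = np.eye(2)
for _ in range(500):
    S = A @ S @ A.T + np.eye(2)

# Whitened transition matrix M = S^{-1/2} A S^{1/2}.
w_eig, V = np.linalg.eigh(S)
S_half = V @ np.diag(np.sqrt(w_eig)) @ V.T
S_invhalf = V @ np.diag(1.0 / np.sqrt(w_eig)) @ V.T
M = S_invhalf @ A @ S_half

# A right singular vector of M, pulled back to stimulus coordinates,
# plays the role of the neuron's "receptive field".
U, s, Vt = np.linalg.svd(M)
rf = S_invhalf @ Vt[0]

def membership(x):
    """Coherent-set membership index: sign of the linear projection onto rf."""
    return np.sign(rf @ x)
```

In the nonlinear, data-driven setting the abstract points to, the singular vector would instead be estimated by past-future canonical correlation analysis on lag vectors rather than computed from A directly.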

October 30, 2025

Scalable inference of functional neural connectivity at submillisecond timescales

A. Medvedeva, E. Balzani, A. Williams, Stephen L Keeley

The Poisson Generalized Linear Model (GLM) is a foundational tool for analyzing neural spike train data. However, standard implementations rely on discretizing spike times into binned count data, limiting temporal resolution and scalability. Here, we develop Monte Carlo (MC) methods and polynomial approximations (PA) to the continuous-time analog of these models, and show them to be advantageous over their discrete-time counterparts. Further, we propose using a set of exponentially scaled Laguerre polynomials as an orthogonal temporal basis, which improves filter identification and yields closed-form integral solutions under the polynomial approximation. Applied to both synthetic and real spike-time data from rodent hippocampus, our methods demonstrate superior accuracy and scalability compared to traditional binned GLMs, enabling functional connectivity inference in large-scale neural recordings that is temporally precise on the order of synaptic dynamical timescales and in agreement with known anatomical properties of hippocampal subregions. We provide open-source implementations of both MC and PA estimators, optimized for GPU acceleration, to facilitate adoption in the neuroscience community.
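The kind of temporal basis the abstract describes can be sketched as follows (the scaling convention and parameters here are illustrative assumptions, not the paper's exact definitions): exponentially scaled Laguerre functions b_n(t) = exp(-t/2τ) L_n(t/τ) form an orthonormal family on [0, ∞) in the rescaled time s = t/τ, which is what makes closed-form integrals tractable:

```python
import numpy as np
from scipy.special import eval_laguerre

def laguerre_basis(t, n_basis=5, tau=10.0):
    """Exponentially scaled Laguerre basis b_n(t) = exp(-t/(2*tau)) * L_n(t/tau).

    Orthonormal on [0, inf) in the rescaled time s = t/tau, and decaying,
    which makes it a convenient basis for spike-history/coupling filters.
    """
    s = np.asarray(t, dtype=float) / tau
    return np.stack([np.exp(-s / 2.0) * eval_laguerre(n, s)
                     for n in range(n_basis)])

t = np.linspace(0.0, 100.0, 1001)   # e.g. time lags in milliseconds
B = laguerre_basis(t)               # shape (5, 1001)

# A temporal filter is then a weighted sum of basis functions
# (weights here are hypothetical, for illustration only):
w = np.array([1.0, -0.5, 0.2, 0.0, 0.0])
filt = w @ B
```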

October 23, 2025

Learning a distance measure from the information-estimation geometry of data

We introduce the Information-Estimation Metric (IEM), a novel form of distance function derived from an underlying continuous probability density over a domain of signals. The IEM is rooted in a fundamental relationship between information theory and estimation theory, which links the log-probability of a signal with the errors of an optimal denoiser, applied to noisy observations of the signal. In particular, the IEM between a pair of signals is obtained by comparing their denoising error vectors over a range of noise amplitudes. Geometrically, this amounts to comparing the score vector fields of the blurred density around the signals over a range of blur levels. We prove that the IEM is a valid global metric and derive a closed-form expression for its local second-order approximation, which yields a Riemannian metric. For Gaussian-distributed signals, the IEM coincides with the Mahalanobis distance. But for more complex distributions, it adapts, both locally and globally, to the geometry of the distribution. In practice, the IEM can be computed using a learned denoiser (analogous to generative diffusion models) and solving a one-dimensional integral. To demonstrate the value of our framework, we learn an IEM on the ImageNet database. Experiments show that this IEM is competitive with or outperforms state-of-the-art supervised image quality metrics in predicting human perceptual judgments.
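For the Gaussian case mentioned in the abstract, the score of the blurred density is available in closed form, so a toy version of the construction is easy to sketch. In the code below, the covariance is hypothetical and the uniform sum over noise amplitudes is a plain discretization chosen for illustration, not the paper's integration formula:

```python
import numpy as np

# Hypothetical signal covariance for a 2D Gaussian density.
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

def score(x, sigma):
    """Score (gradient of log-density) of N(0, Sigma) blurred by N(0, sigma^2 I)."""
    C = Sigma + sigma**2 * np.eye(len(x))
    return -np.linalg.solve(C, x)

def toy_iem_sq(x1, x2, sigmas=np.linspace(0.1, 5.0, 50)):
    """Toy squared distance: compare score vectors across a range of blur levels.

    For a Gaussian, score(x1, s) - score(x2, s) = -C_s^{-1} (x1 - x2), so this
    accumulates a Mahalanobis-type quadratic form in x1 - x2, consistent with
    the IEM reducing to the Mahalanobis distance in the Gaussian case.
    """
    return sum(float(np.sum((score(x1, s) - score(x2, s)) ** 2)) for s in sigmas)
```

For non-Gaussian data, the closed-form `score` would be replaced by a learned denoiser evaluated at each noise amplitude, as the abstract describes.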


Correcting Non-Uniform Milling in FIB-SEM Images with Unsupervised Cross-Plane Image-to-Image Translation

Yicong Li, Yuri Kreinin, Siyu Huang, E. Schomburg, D. Chklovskii, Hanspeter Pfister, J. Wu

Motivation: Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) is an advanced volume electron microscopy technology with growing applications, offering thinner sectioning than other volume electron microscopes. Such axial resolution is crucial for accurate segmentation and reconstruction of fine structures in biological tissues. In practice, however, the milling thickness is not always uniform across the sample surface, which distorts the axial plane. Existing image processing approaches often (i) assume constant section thickness; (ii) consist of multiple separate processing steps (i.e., they are not end-to-end); and (iii) require ground truth images for modeling, which may entail significant labor and be unsuitable for rapid analysis.

Results: We develop a deep learning method to correct non-uniform milling artifacts observed in FIB-SEM images. The proposed method is an image-to-image translation technique that can mitigate image distortions in an unsupervised manner. It conducts cross-plane learning within 3D image volumes without any ground truth annotations. We demonstrate the efficacy of our method on a real-world micro-wasp dataset, showing significantly improved image quality after correction through qualitative and quantitative analysis.

October 1, 2025

Coherent dynamics of thalamic head-direction neurons irrespective of input

G. Viejo, Sofia Skromne Carrasco, Adrien Peyrache

While the thalamus is known to relay and modulate sensory signals to the cortex, whether it also participates in active computation and intrinsic signal generation remains unresolved. The anterodorsal nucleus of the thalamus broadcasts the head-direction (HD) signal, which is generated in the brainstem, particularly in the upstream lateral mammillary nucleus, and thalamic HD cells remain coordinated even during sleep. Here, by recording and manipulating neuronal activity along the mammillary–thalamic–cortical pathway, we show that coherence among thalamic HD cells persists even when their upstream inputs are decorrelated, particularly during non-Rapid Eye Movement sleep. These findings suggest that thalamic circuits are sufficient to generate and maintain coherent population dynamics in the absence of structured input.

September 16, 2025

Statistical mechanics of support vector regression

A key problem in deep learning and computational neuroscience is relating the geometrical properties of neural representations to task performance. Here, we consider this problem for continuous decoding tasks where neural variability may affect task precision. Using methods from statistical mechanics, we study the average-case learning curves for ɛ-insensitive support vector regression and discuss its capacity as a measure of linear decodability. Our analysis reveals a phase transition in training error at a critical load, capturing the interplay between the tolerance parameter ɛ and neural variability. We uncover a double-descent phenomenon in the generalization error, showing that ɛ acts as a regularizer, both suppressing and shifting these peaks. Theoretical predictions are validated both with toy models and deep neural networks, extending the theory of support vector machines to continuous tasks with inherent neural variability.
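The role of the tolerance parameter can be illustrated with a minimal numpy sketch (all parameter choices below are hypothetical toy values, and the subgradient fit is a stand-in for a proper SVR solver): ε-insensitive regression penalizes only residuals that leave a tube of half-width ε around the prediction, which is what lets ε act as a regularizer:

```python
import numpy as np

def eps_insensitive_loss(residual, eps):
    """max(|r| - eps, 0): zero inside the tube, linear outside."""
    return np.maximum(np.abs(residual) - eps, 0.0)

# Toy continuous decoding task: read out a scalar from noisy "neural" features.
rng = np.random.default_rng(0)
n_samples, n_neurons, eps = 200, 50, 0.2
X = rng.standard_normal((n_samples, n_neurons))
beta = rng.standard_normal(n_neurons) / np.sqrt(n_neurons)
y = X @ beta + 0.05 * rng.standard_normal(n_samples)   # neural variability

# Fit a linear decoder by subgradient descent on the mean eps-insensitive
# loss plus a tiny ridge term (illustrative optimizer, not the paper's method).
w_hat = np.zeros(n_neurons)
for _ in range(2000):
    r = y - X @ w_hat
    g = -(X.T @ (np.sign(r) * (np.abs(r) > eps))) / n_samples + 1e-3 * w_hat
    w_hat -= 0.01 * g

train_loss = eps_insensitive_loss(y - X @ w_hat, eps).mean()
```

Since the noise level here is smaller than ε, a good decoder can place every training residual inside the tube, driving the training error to (near) zero, the regime below the critical load analyzed in the paper.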


Active Liquid Crystal Theory Explains the Collective Organization of Microtubules in Human Mitotic Spindles

Colm P. Kelleher, S. Maddu, Mustafa Basaran, Thomas Müller-Reichert, M. Shelley, D. Needleman

How thousands of microtubules and molecular motors self-organize into spindles remains poorly understood. By combining static, nanometer-resolution, large-scale electron tomography reconstructions and dynamic, optical-resolution, polarized light microscopy, we test an active liquid crystal continuum model of mitotic spindles in human tissue culture cells. The predictions of this coarse-grained theory quantitatively agree with the experimentally measured spindle morphology and fluctuation spectra. These findings argue that local interactions and polymerization produce collective alignment, diffusive-like motion, and polar transport, which govern the behaviors of the spindle's microtubule network, and provide a means to measure the spindle's material properties. This work demonstrates that a coarse-grained theory featuring measurable, physically interpretable parameters can quantitatively describe the mechanical behavior and self-organization of human mitotic spindles.

July 29, 2025

Stability of co-annular active and passive confined fluids

Tanumoy Dhar, M. Shelley, D. Saintillan

The translation and shape deformations of a passive viscous Newtonian droplet immersed in an active nematic liquid crystal under circular confinement are analyzed using a linear stability analysis. We focus on the case of a sharply aligned active nematic in the limit of strong elastic relaxation in two dimensions. Using an active liquid crystal model, we employ the Lorentz reciprocal theorem for Stokes flow to study the growth of interfacial perturbations as a result of both active and elastic stresses. Instabilities are uncovered in both extensile and contractile systems, for which growth rates are calculated and presented in terms of the dimensionless ratios of active, elastic, and capillary stresses, as well as the viscosity ratio between the two fluids. We also extend our theory to analyze the inverse scenario, namely, the stability of an active nematic droplet surrounded by a passive viscous layer. Our results highlight the subtle interplay of capillary, active, elastic, and viscous stresses in governing droplet stability. The instabilities uncovered here may be relevant to a plethora of biological active systems, from the dynamics of passive droplets in bacterial suspensions to the organization of subcellular compartments inside the cell and cell nucleus.

July 25, 2025

Comprehensive characterization of human color discrimination thresholds

Fangfang Hong, Ruby Bouhassira, Jason Chow, Craig Sanders, Michael Shvartsman, Phillip Guan, A. Williams, D. H. Brainard

Discrimination thresholds reveal the limits of human perception; scientists have studied them since the time of Fechner in the 1800s. Forced-choice psychophysical methods combined with the method of constant stimuli or parametric adaptive trial-placement procedures are well-suited for measuring one-dimensional psychometric functions. However, extending these methods to characterize psychometric fields in higher-dimensional stimulus spaces, such as three-dimensional color space, poses a significant challenge. Here, we introduce a novel Wishart Process Psychophysical Model (WPPM) that leverages the smooth variation of threshold across stimulus space. We demonstrate the use of the WPPM in conjunction with a non-parametric adaptive trial-placement procedure by characterizing the full psychophysical field for color discrimination in the isoluminant plane. Each participant (N = 8) completed between 6,000 and 6,466 three-alternative forced-choice (3AFC) oddity color discrimination trials. The WPPM was fit to these trials. Importantly, once fit, the WPPM allows readout of discrimination performance between any pair of stimuli, providing a comprehensive characterization of the psychometric field. In addition, the WPPM readouts were validated for each participant by comparison with 25 probe psychometric functions. These were measured with an additional 6,000 trials per participant that were held out from the WPPM fit. The dataset offers a foundational resource for developing perceptual color metrics and for benchmarking mechanistic models of color processing. This approach is broadly generalizable to other perceptual domains …


Representational drift and learning-induced stabilization in the piriform cortex

Guillermo B. Morales, Miguel A. Muñoz, Y. Tu

The brain encodes external stimuli through patterns of neural activity, forming internal representations of the world. Increasing experimental evidence has shown that neural representations for a specific stimulus can change over time in a phenomenon called “representational drift” (RD). However, the underlying mechanisms for this widespread phenomenon remain poorly understood. Here, we study RD in the piriform cortex of the olfactory system with a realistic neural network model that incorporates two general mechanisms for synaptic weight dynamics operating at two well-separated timescales: spontaneous multiplicative fluctuations on a scale of days and spike-timing-dependent plasticity (STDP) effects on a scale of seconds. We show that the slow multiplicative fluctuations in synaptic sizes, which lead to a steady-state distribution of synaptic weights consistent with experiments, can induce RD effects that are in quantitative agreement with recent empirical evidence. Furthermore, our model reveals that the fast STDP learning dynamics during presentation of a given odor drives the system toward a low-dimensional representational manifold, which effectively reduces the dimensionality of synaptic weight fluctuations and thus suppresses RD. Specifically, our model explains why representations of already “learned” odors drift slower than unfamiliar ones, as well as the dependence of the drift rate on the frequency of stimulus presentation—both of which align with recent experimental data. The proposed model not only offers a simple explanation for the emergence of RD and its relation to learning in the piriform cortex, but also provides a general theoretical framework for studying representation dynamics in other neural systems.
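The slow mechanism can be sketched in a few lines (parameter values are illustrative assumptions, not the paper's fitted values): multiplicative day-to-day fluctuations that are mean-reverting in log-weight keep the population weight distribution approximately log-normal, consistent with experiment, while individual weights drift:

```python
import numpy as np

rng = np.random.default_rng(1)
n_syn, n_days = 2000, 200

# Start from an (approximately stationary) log-normal weight distribution.
w = rng.lognormal(mean=0.0, sigma=0.7, size=n_syn)
w0 = w.copy()

for _ in range(n_days):
    # Ornstein-Uhlenbeck dynamics in log-weight: a multiplicative fluctuation
    # with weak mean reversion, which preserves the log-normal shape.
    w = np.exp(0.99 * np.log(w) + 0.1 * rng.standard_normal(n_syn))

# Population statistics are stable, but individual weights have drifted:
# the correlation with the initial log-weights decays as 0.99**n_days.
drift_corr = np.corrcoef(np.log(w0), np.log(w))[0, 1]
```

The paper's second ingredient, fast STDP during odor presentation, would then act on top of these slow fluctuations, confining the drift to a low-dimensional manifold for learned odors.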
