2596 Publications

Modeling Neural Activity with Conditionally Linear Dynamical Systems

Victor Geadah, A. Nejatbakhsh, D. Lipshutz, J. Pillow, A. Williams

Neural population activity exhibits complex, nonlinear dynamics, varying in time, over trials, and across experimental conditions. Here, we develop Conditionally Linear Dynamical System (CLDS) models as a general-purpose method to characterize these dynamics. These models use Gaussian Process (GP) priors to capture the nonlinear dependence of circuit dynamics on task and behavioral variables. Conditioned on these covariates, the data is modeled with linear dynamics. This allows for transparent interpretation and tractable Bayesian inference. We find that CLDS models can perform well even in severely data-limited regimes (e.g. one trial per condition) due to their Bayesian formulation and ability to share statistical power across nearby task conditions. In example applications, we apply CLDS to model thalamic neurons that nonlinearly encode heading direction and to model motor cortical neurons during a cued reaching task.
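The core idea — nonlinear dependence on a covariate, but linear dynamics once conditioned on it — can be illustrated with a toy sketch. This is not the authors' model (it omits the GP prior and Bayesian inference entirely); the dynamics function, its sinusoidal dependence on a scalar "heading" covariate, and all parameters are hypothetical.

```python
import numpy as np

def clds_dynamics(theta):
    """Toy condition-dependent linear dynamics: a 2-D rotation whose
    angle varies smoothly with a scalar covariate theta (e.g. heading).
    The smooth dependence is a stand-in for the paper's GP prior."""
    a = 0.1 + 0.2 * np.sin(theta)              # hypothetical smooth dependence
    c, s = np.cos(a), np.sin(a)
    return 0.98 * np.array([[c, -s], [s, c]])  # mildly contractive rotation

def simulate(theta, T=200, noise=0.05, seed=0):
    """Roll out the latent linear dynamics for one fixed task condition."""
    rng = np.random.default_rng(seed)
    A = clds_dynamics(theta)
    x = np.zeros((T, 2))
    x[0] = [1.0, 0.0]
    for t in range(T - 1):
        x[t + 1] = A @ x[t] + noise * rng.standard_normal(2)
    return x

# Conditioned on theta, the model is linear, so the condition-specific
# dynamics matrix is recoverable by ordinary least squares:
traj = simulate(0.7)
M, *_ = np.linalg.lstsq(traj[:-1], traj[1:], rcond=None)
A_hat = M.T
A_true = clds_dynamics(0.7)
err = np.max(np.abs(A_hat - A_true))
print(err)  # small estimation error from one simulated trial
```

The point of the conditional-linearity structure is visible even here: within a condition, estimation reduces to a linear problem, which is what makes Bayesian inference tractable in the full model.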

The ManifoldEM method for cryo-EM: a step-by-step breakdown accompanied by a modern Python implementation

A. A. Ojha, R. Blackwell, M. Astore, S. Hanson, et al.

Resolving continuous conformational heterogeneity in single-particle cryo-electron microscopy (cryo-EM) is a field in which new methods are now emerging regularly. Methods range from traditional statistical techniques to state-of-the-art neural network approaches. Such ongoing efforts continue to enhance the ability to explore and understand the continuous conformational variations in cryo-EM data. One of the first methods was the manifold embedding approach or ManifoldEM. However, comparing it with more recent methods has been challenging due to software availability and usability issues. In this work, we introduce a modern Python implementation that is user-friendly, orders of magnitude faster than its previous versions and designed with a developer-ready environment. This implementation allows a more thorough evaluation of the strengths and limitations of methods addressing continuous conformational heterogeneity in cryo-EM, paving the way for further community-driven improvements.

Accurate close interactions of Stokes spheres using lubrication-adapted image systems

Anna Broms, A. Barnett, Anna-Karin Tornberg

Stokes flows with near-touching rigid particles induce near-singular lubrication forces under relative motion, making their accurate numerical treatment challenging. With the aim of controlling the accuracy with a computationally cheap method, we present a new technique that combines the method of fundamental solutions (MFS) with the method of images. For rigid spheres, we propose to represent the flow using Stokeslet proxy sources on interior spheres, augmented by lines of image sources adapted to each near-contact to resolve lubrication. Source strengths are found by a least-squares solve at contact-adapted boundary collocation nodes. We include extensive numerical tests, and validate against reference solutions from a well-resolved boundary integral formulation. With less than 60 additional image sources per particle per contact, we show controlled uniform accuracy to three relative digits in surface velocities, and up to five digits in particle forces and torques, for all separations down to a thousandth of the radius. In the special case of flows around fixed particles, the proxy sphere alone gives controlled accuracy. A one-body preconditioning strategy allows acceleration with the fast multipole method, hence close to linear scaling in the number of particles. This is demonstrated by solving problems of up to 2000 spheres on a workstation using only 700 proxy sources per particle.
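The method of fundamental solutions named in the abstract can be sketched in a much simpler setting: an interior Laplace Dirichlet problem on the unit disk, with the 2-D log kernel standing in for the Stokeslet. Everything here (geometry, source radius, node counts, test solution) is a hypothetical toy, not the paper's Stokes solver, but the structure — proxy sources off the boundary, a least-squares solve at collocation nodes — is the same.

```python
import numpy as np

M, N = 200, 60                                   # collocation nodes, proxy sources
tb = 2 * np.pi * np.arange(M) / M
bdry = np.c_[np.cos(tb), np.sin(tb)]             # unit-circle boundary nodes
ts = 2 * np.pi * np.arange(N) / N
src = 1.5 * np.c_[np.cos(ts), np.sin(ts)]        # proxy sources outside the domain

def G(x, y):
    """2-D Laplace fundamental solution (log kernel), all pairs of x, y."""
    r = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return -np.log(r) / (2 * np.pi)

f = lambda p: p[:, 0] ** 2 - p[:, 1] ** 2        # harmonic test solution x^2 - y^2

# Source strengths from a least-squares solve at the collocation nodes,
# mirroring the paper's contact-adapted least-squares fit.
A = G(bdry, src)
coef, *_ = np.linalg.lstsq(A, f(bdry), rcond=None)

# Evaluate at interior test points and compare with the exact solution.
pts = 0.5 * np.c_[np.cos(ts), np.sin(ts)]
err = np.max(np.abs(G(pts, src) @ coef - f(pts)))
print(err)  # exponentially small for smooth data
```

The least-squares system is badly conditioned, as is typical for MFS, but the backward-stable solve still delivers high accuracy; the paper's contribution is the image-source augmentation that preserves this accuracy when particles nearly touch.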

The Geometry of Prompting: Unveiling Distinct Mechanisms of Task Adaptation in Language Models

A. Kirsanov, C. Chou, Kyunghyun Cho, S. Chung

Decoder-only language models have the ability to dynamically switch between various computational tasks based on input prompts. Despite many successful applications of prompting, there is very limited understanding of the internal mechanism behind such flexibility. In this work, we investigate how different prompting methods affect the geometry of representations in these models. Employing a framework grounded in statistical physics, we reveal that various prompting techniques, while achieving similar performance, operate through distinct representational mechanisms for task adaptation. Our analysis highlights the critical role of input distribution samples and label semantics in few-shot in-context learning. We also demonstrate evidence of synergistic and interfering interactions between different tasks on the representational level. Our work contributes to the theoretical understanding of large language models and lays the groundwork for developing more effective, representation-aware prompting strategies.

February 11, 2025

Spatial Frequency Maps in Human Visual Cortex: A Replication and Extension

Jiyeong Ha, B. Broderick, Kendrick Kay, J. Winawer

In a step toward developing a model of human primary visual cortex, a recent study introduced a model of spatial frequency tuning in V1 (Broderick, Simoncelli, & Winawer, 2022). The model is compact, using just 9 parameters to predict BOLD response amplitude for locations across all of V1 as a function of stimulus orientation and spatial frequency. Here we replicated this analysis in a new dataset, the ‘nsdsynthetic’ supplement to the Natural Scenes Dataset (Allen et al., 2022), to assess generalization of model parameters. Furthermore, we extended the analyses to extrastriate maps V2 and V3. For each retinotopic map in the 8 NSD subjects, we fit the 9-parameter model. Despite many experimental differences between NSD and the original study, including stimulus size, experimental design, and MR field strength, there was good agreement in most model parameters. The dependence of preferred spatial frequency on eccentricity in V1 was similar between NSD and Broderick et al. Moreover, the effect of absolute stimulus orientation on spatial frequency maps was similar: higher preferred spatial frequency for horizontal and cardinal orientations compared to vertical and oblique orientations in both studies. The extension to extrastriate maps revealed that the biggest change in tuning between maps was in bandwidth: the bandwidth in spatial frequency tuning increased by 70% from V1 to V2 and 100% from V1 to V3, paralleling known increases in receptive field size. Together, the results show robust reproducibility and bring us closer to a systematic characterization of spatial encoding in the human visual system.

February 5, 2025

Optimized Linear Measurements for Inverse Problems using Diffusion-Based Image Generation

We examine the problem of selecting a small set of linear measurements for reconstructing high-dimensional signals. Well-established methods for optimizing such measurements include principal component analysis (PCA), independent component analysis (ICA) and compressed sensing (CS) based on random projections, all of which rely on axis- or subspace-aligned statistical characterization of the signal source. However, many naturally occurring signals, including photographic images, contain richer statistical structure. To exploit such structure, we introduce a general method for obtaining an optimized set of linear measurements for efficient image reconstruction, where the signal statistics are expressed by the prior implicit in a neural network trained to perform denoising (generally known as a "diffusion model"). We demonstrate that the optimal measurements derived for two natural image datasets differ from those of PCA, ICA, or CS, and result in substantially lower mean squared reconstruction error. Interestingly, the marginal distributions of the measurement values are asymmetrical (skewed), substantially more so than those of previous methods. We also find that optimizing with respect to perceptual loss, as quantified by structural similarity (SSIM), leads to measurements different from those obtained when optimizing for MSE. Our results highlight the importance of incorporating the specific statistical regularities of natural signals when designing effective linear measurements.
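The PCA baseline the abstract compares against can be made concrete: among purely linear measure-and-reconstruct schemes, projecting onto the top principal components minimizes MSE, which is exactly the second-order characterization the paper moves beyond. The sketch below uses synthetic Gaussian data with a hypothetical decaying spectrum in place of image patches; it illustrates the baseline, not the paper's diffusion-prior optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 16, 4                  # samples, signal dim, number of measurements
C = np.diag(1.0 / (1 + np.arange(d)))  # hypothetical decaying covariance spectrum
X = rng.standard_normal((n, d)) @ np.sqrt(C)
Xc = X - X.mean(0)                     # center the data

# PCA measurements: top-k right singular vectors of the data matrix.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k]                             # k x d measurement matrix (orthonormal rows)
mse_pca = np.mean((Xc - (Xc @ P.T) @ P) ** 2)

# Random projections (the CS-style baseline) with the best linear decoder.
R = rng.standard_normal((k, d))
Yr = Xc @ R.T
W, *_ = np.linalg.lstsq(Yr, Xc, rcond=None)
mse_rand = np.mean((Xc - Yr @ W) ** 2)

print(mse_pca, mse_rand)  # PCA is optimal among linear schemes here
```

For Gaussian data this ordering is forced by the Eckart–Young theorem; the paper's observation is that natural images are not Gaussian, so measurements optimized against a denoiser-implied prior can beat PCA.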

The No-Underrun Sampler: A Locally-Adaptive, Gradient-Free MCMC Method

N. Bou-Rabee, B. Carpenter, S. Liu, Stefan Oberdörster

In this work, we introduce the No-Underrun Sampler (NURS), a locally-adaptive, gradient-free Markov chain Monte Carlo method that blends ideas from Hit-and-Run and the No-U-Turn Sampler. NURS dynamically adapts to the local scale of the target distribution without requiring gradient evaluations, making it especially suitable for applications where gradients are unavailable or costly. We establish key theoretical properties, including reversibility, formal connections to Hit-and-Run and Random Walk Metropolis, Wasserstein contraction comparable to Hit-and-Run in Gaussian targets, and bounds on the total variation distance between the transition kernels of Hit-and-Run and NURS. Empirical experiments, supported by theoretical insights, illustrate the ability of NURS to sample from Neal's funnel, a challenging multi-scale distribution from Bayesian hierarchical inference.
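One of the two ingredients NURS blends — Hit-and-Run — is simple enough to sketch directly. Each step draws a uniformly random direction, then samples exactly from the target restricted to the line through the current point; for a Gaussian target that 1-D conditional is itself Gaussian, so no Metropolis correction is needed. The target and all parameters below are illustrative, and this omits the No-U-Turn-style local adaptation that NURS adds.

```python
import numpy as np

rng = np.random.default_rng(1)
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])  # toy correlated Gaussian target
Prec = np.linalg.inv(Sigma)                 # precision matrix

def hit_and_run_step(x):
    d = rng.standard_normal(2)
    d /= np.linalg.norm(d)                  # uniform random direction
    # Log-density along x + t*d is quadratic in t, so the exact
    # line conditional is Gaussian with mean -b/a and variance 1/a.
    a = d @ Prec @ d
    b = d @ Prec @ x
    t = rng.normal(-b / a, 1 / np.sqrt(a))
    return x + t * d

x = np.zeros(2)
samples = []
for _ in range(20000):
    x = hit_and_run_step(x)
    samples.append(x.copy())
S = np.array(samples[2000:])                # discard burn-in
print(np.round(np.cov(S.T), 1))             # approximately Sigma
```

In general targets the line conditional is not available in closed form, which is where NURS's locally adaptive, gradient-free construction along the line comes in.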

A fully adaptive, high-order, fast Poisson solver for complex two-dimensional geometries

We present a new framework for the fast solution of inhomogeneous elliptic boundary value problems in domains with smooth boundaries. High-order solvers based on adaptive box codes or the fast Fourier transform can efficiently treat the volumetric inhomogeneity, but require care to be taken near the boundary to ensure that the volume data is globally smooth. We avoid function extension or cut-cell quadratures near the boundary by dividing the domain into two regions: a bulk region away from the boundary that is efficiently treated with a truncated free-space box code, and a variable-width boundary-conforming strip region that is treated with a spectral collocation method and accompanying fast direct solver. Particular solutions in each region are then combined with Laplace layer potentials to yield the global solution. The resulting solver has an optimal computational complexity of O(N) for an adaptive discretization with N degrees of freedom. With an efficient two-dimensional (2D) implementation we demonstrate adaptive resolution of volumetric data, boundary data, and geometric features across a wide range of length scales, to typically 10-digit accuracy. The cost of all boundary corrections remains small relative to that of the bulk box code. The extension to 3D is expected to be straightforward in many cases because the strip
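The bulk solvers the abstract refers to (box codes, FFT) rely on smooth, globally defined volume data — the property the strip construction is designed to protect. A minimal periodic stand-in shows why such solvers are so attractive: solving -Δu = f on [0, 2π)² with the FFT, against a manufactured solution. The grid size and test function are arbitrary choices for illustration.

```python
import numpy as np

n = 64
x = 2 * np.pi * np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(3 * X) * np.cos(2 * Y)
f = 13 * u_exact                       # -Laplacian of u is (9 + 4) * u here

k = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers on this grid
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                         # avoid dividing by zero at the mean mode
u_hat = np.fft.fft2(f) / K2            # -Laplacian becomes multiplication by |k|^2
u_hat[0, 0] = 0.0                      # fix the additive constant (zero mean)
u = np.real(np.fft.ifft2(u_hat))
err = np.max(np.abs(u - u_exact))
print(err)                             # spectral accuracy for smooth periodic f
```

For smooth periodic data the error sits at machine precision; the difficulty the paper addresses is that data restricted to an irregular domain is not globally smooth, which is what the boundary-conforming strip and layer-potential corrections repair.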

Sub-cellular population imaging tools reveal stable apical dendrites in hippocampal area CA3

J. Moore, Shannon K. Rashid, Emmett Bicker, Cara D. Johnson, Naomi Codrington, D. Chklovskii, Jayeeta Basu

Apical and basal dendrites of pyramidal neurons receive anatomically and functionally distinct inputs, implying compartment-level functional diversity during behavior. To test this, we imaged in vivo calcium signals from soma, apical dendrites, and basal dendrites in mouse hippocampal CA3 pyramidal neurons during head-fixed navigation. To capture compartment-specific population dynamics, we developed computational tools to automatically segment dendrites and extract accurate fluorescence traces from densely labeled neurons. We validated the method on sparsely labeled preparations and synthetic data, predicting an optimal labeling density for high experimental throughput and analytical accuracy. Our method detected rapid, local dendritic activity. Dendrites showed robust spatial tuning, similar to soma but with higher activity rates. Across days, apical dendrites remained more stable and outperformed in decoding of the animal’s position. Thus, population-level apical and basal dendritic differences may reflect distinct compartment-specific input-output functions and computations in CA3. These tools will facilitate future studies mapping sub-cellular activity and its relation to behavior.

Understanding Optimization in Deep Learning with Central Flows

J. Cohen, Alex Damian, Ameet Talwalkar, J Zico Kolter, Jason D. Lee

Optimization in deep learning remains poorly understood. A key difficulty is that optimizers exhibit complex oscillatory dynamics, referred to as "edge of stability," which cannot be captured by traditional optimization theory. In this paper, we show that the path taken by an oscillatory optimizer can often be captured by a central flow: a differential equation which directly models the time-averaged (i.e. smoothed) optimization trajectory. We empirically show that these central flows can predict long-term optimization trajectories for generic neural networks with a high degree of numerical accuracy. By interpreting these flows, we are able to understand how gradient descent makes progress even as the loss sometimes goes up; how adaptive optimizers "adapt" to the local loss landscape; and how adaptive optimizers implicitly seek out regions of weight space where they can take larger steps. These insights (and others) are not apparent from the optimizers' update rules, but are revealed by the central flows. Therefore, we believe that central flows constitute a promising tool for reasoning about optimization in deep learning.
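The oscillation-versus-smoothed-trajectory distinction can be seen in a one-line toy: gradient descent on a quadratic whose curvature puts the step size near the stability threshold flips sign every step, yet the time-averaged magnitude decays smoothly. This is only a discrete caricature of the central-flow picture (the paper works with full neural-network training dynamics); the step size and curvature are arbitrary illustrative values.

```python
import numpy as np

eta, lam = 0.1, 19.0          # step size, curvature; eta * lam = 1.9, near 2
x = 1.0
traj = []
for _ in range(200):
    x -= eta * lam * x        # gradient descent on f(x) = 0.5 * lam * x**2
    traj.append(x)
traj = np.array(traj)

# Raw iterates oscillate: the update multiplier 1 - eta*lam = -0.9 < 0.
sign_flips = int(np.sum(np.sign(traj[1:]) != np.sign(traj[:-1])))

# The smoothed (time-averaged) magnitude decays monotonically, which is
# the quantity a central flow would track.
env = np.abs(traj)
smooth = np.convolve(env, np.ones(5) / 5, mode="valid")
monotone = bool(np.all(np.diff(smooth) <= 0))
print(sign_flips, monotone)   # oscillates every step, yet smoothly decays
```

The paper's central flows formalize this averaging for realistic optimizers and losses, where the oscillations live in curvature directions of the loss landscape rather than a single coordinate.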
