2635 Publications

A Lightweight, Geometrically Flexible Fast Algorithm for the Evaluation of Layer and Volume Potentials

F. Fryklund, L. Greengard, S. Jiang, Samuel Potter

Over the last two decades, several fast, robust, and high-order accurate methods have been developed for solving the Poisson equation in complicated geometry using potential theory. In this approach, rather than discretizing the partial differential equation itself, one first evaluates a volume integral to account for the source distribution within the domain, followed by solving a boundary integral equation to impose the specified boundary conditions. Here, we present a new fast algorithm which is easy to implement and compatible with virtually any discretization technique, including unstructured domain triangulations, such as those used in standard finite element or finite volume methods. Our approach combines earlier work on potential theory for the heat equation, asymptotic analysis, the nonuniform fast Fourier transform (NUFFT), and the dual-space multilevel kernel-splitting (DMK) framework. It is insensitive to flaws in the triangulation, permitting not just nonconforming elements, but arbitrary aspect ratio triangles, gaps and various other degeneracies. On a single CPU core, the scheme computes the solution at a rate comparable to that of the fast Fourier transform (FFT) in work per gridpoint.
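As a point of reference for the FFT-rate claim, the sketch below (our own illustration, not the paper's algorithm) shows the classical FFT-based Poisson solve on a periodic box, whose per-gridpoint cost is the benchmark the abstract cites. The manufactured solution and grid size are assumptions for the demo.

```python
import numpy as np

# Minimal sketch: solve -Laplacian(u) = f on the periodic box [0, 2*pi)^2
# with the FFT. This is the cost baseline the abstract compares against,
# not the paper's method (which handles complicated geometry).
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Manufactured solution u = sin(x) cos(2y), so f = -Laplacian(u) = 5 sin(x) cos(2y).
u_exact = np.sin(X) * np.cos(2.0 * Y)
f = 5.0 * np.sin(X) * np.cos(2.0 * Y)

k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers 0, 1, ..., -1
KX, KY = np.meshgrid(k, k, indexing="ij")
ksq = KX**2 + KY**2
ksq[0, 0] = 1.0                    # avoid division by zero; zero mode fixed below

u_hat = np.fft.fft2(f) / ksq       # spectral inverse of -Laplacian
u_hat[0, 0] = 0.0                  # pin the arbitrary constant (mean-zero solution)
u = np.real(np.fft.ifft2(u_hat))

err = np.max(np.abs(u - u_exact))
```

Because the manufactured solution is bandlimited, the spectral solve recovers it to roughly machine precision; the point is only the O(N log N) work, about the rate the abstract reports per gridpoint.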


Months-long stability of the head-direction system

Sofia Skromne Carrasco, G. Viejo, Adrien Peyrache

Spatial orientation enables animals to navigate their environment by rapidly mapping the external world and remembering key locations. In mammals, the head-direction (HD) system is an essential component of the navigation system of the brain. Although the tuning of neurons in other areas of this system is unstable—evidenced, for example, by the change in the spatial tuning of hippocampal place cells across days—the stability of the neuronal code that underlies the sense of direction remains unclear. Here, by longitudinally tracking the activity of the same HD cells in the post-subiculum of freely moving mice, we show stability and plasticity at two levels. Although the population structure remained highly conserved across environments and over time, subtle shifts in population coherence encoded environment identity. In addition, the HD system established a distinct, environment-specific alignment between its internal representation and external landmarks, which persisted for weeks, even after a single exposure. These findings suggest that the HD system forms long-lasting orientation memories that are anchored to specific environments.


Neural population geometry and optimal coding of tasks with shared latent structure

Albert J. Wakhloo, Will Slatton, S. Chung

Animals can recognize latent structures in their environment and apply this information to efficiently navigate the world. Several works argue that the brain supports these abilities by forming neural representations from which behaviorally relevant variables can be read out across contexts and tasks. However, it is unclear which features of neural activity facilitate downstream readout. Here we analytically determine the geometric properties of neural activity that govern linear readout generalization on a set of tasks sharing a common latent structure. We show that four statistics summarizing the dimensionality, factorization and correlation structures of neural activity determine generalization. Early in learning, optimal neural representations are lower dimensional and exhibit higher correlations between single units and task variables than late in learning. We support these predictions through biological and artificial neural data analysis. Our results tie the linearly decodable information in neural population activity to its geometry.
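For readers unfamiliar with such summary statistics, one standard dimensionality measure of this kind is the participation ratio. The sketch below is our own illustration on hypothetical population activity driven by a low-dimensional latent task variable; the paper's four statistics may differ.

```python
import numpy as np

# Participation ratio PR = (sum(lam))^2 / sum(lam^2) over covariance
# eigenvalues lam: a common scalar summary of how many dimensions the
# population activity effectively occupies. Data here are synthetic.
rng = np.random.default_rng(0)
n_units, n_trials = 50, 2000

# Hypothetical activity: a 3-dimensional latent variable read out through
# random weights, plus small private noise per unit.
latents = rng.standard_normal((n_trials, 3))
W = rng.standard_normal((3, n_units))
activity = latents @ W + 0.1 * rng.standard_normal((n_trials, n_units))

lam = np.linalg.eigvalsh(np.cov(activity.T))  # eigenvalues of the unit covariance
pr = lam.sum() ** 2 / np.sum(lam**2)
```

With three dominant latent directions, the participation ratio comes out near 3 rather than the ambient 50, illustrating how such a statistic separates effective from nominal dimensionality.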


Exploring How Workflow Variations in Denaturation-Based Assays Impact Global Protein–Protein Interaction Predictions

Tavis J. Reed, Laura M. Haubold, O. Troyanskaya, et al.

Protein denaturation-based assays, such as thermal proximity coaggregation (TPCA) and ion-based proteome-integrated solubility alteration (I-PISA), are powerful tools for characterizing global protein–protein interaction (PPI) networks. These workflows probe PPIs with different denaturation methods, thermal- or ion-based, but how these differences influence PPI network mapping remains poorly understood. Here, we provide an experimental and computational characterization of the effect of the denaturation-based PPI assay on the observed PPI networks. We establish the value of both soluble and insoluble fractions in PPI prediction, determine the ability to minimize sample amount requirements, and assess different relative quantification methods during virus infection. Generating paired TPCA and I-PISA datasets, we define both overlapping sets of proteins and distinct PPI networks specifically captured by these methods. Assessing protein physical properties and subcellular localizations, we show that size, structural complexity, hydrophobicity, and localization influence PPI detection in a workflow-specific manner. We show that the insoluble fractions expand the detectable PPI landscape, underscoring their value in these workflows. Focusing on selected PPI networks (cytoskeletal and DNA repair), we observe the detection of distinct functional populations. Using influenza A infection as a model for cellular perturbation, we demonstrate that integrating PPI predictions from soluble and insoluble workflows enhances the ability to build biologically informative and interconnected networks. Examining the effects of reducing starting material for TPCA assays, we find that PPI prediction quality remains robust when using a single well of a 96-well plate, a ∼500× reduction in sample input from usual workflows. Introducing simple workflow modifications, we show that label-free data-independent acquisition (DIA) TPCA yields performance comparable to the traditional tandem mass tag (TMT) data-dependent acquisition (DDA) TPCA workflow. This work provides insights into denaturation-based assays, highlights the value of insoluble fractions, and offers practical improvements for enhancing global PPI network mapping.


Quasi Monte Carlo methods enable extremely low-dimensional deep generative models

Miles Martinez, A. Williams

This paper introduces quasi-Monte Carlo latent variable models (QLVMs): a class of deep generative models that are specialized for finding extremely low-dimensional and interpretable embeddings of high-dimensional datasets. Unlike standard approaches, which rely on a learned encoder and variational lower bounds, QLVMs directly approximate the marginal likelihood by randomized quasi-Monte Carlo integration. While this brute force approach has drawbacks in higher-dimensional spaces, we find that it excels in fitting one-, two-, and three-dimensional deep latent variable models. Empirical results on a range of datasets show that QLVMs consistently outperform conventional variational autoencoders (VAEs) and importance weighted autoencoders (IWAEs) with matched latent dimensionality. The resulting embeddings enable transparent visualization and post hoc analyses such as nonparametric density estimation, clustering, and geodesic path computation, which are nontrivial to validate in higher-dimensional spaces. While our approach is compute-intensive and struggles to generate fine-scale details in complex datasets, it offers a compelling solution for applications prioritizing interpretability and latent space analysis.
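The core idea, estimating the marginal likelihood by randomized quasi-Monte Carlo instead of a variational bound, can be illustrated on a toy linear-Gaussian decoder, where the exact marginal is available for comparison. This is our own sketch under those assumptions, not the QLVM code; the decoder weights and data point are made up.

```python
import numpy as np
from scipy.stats import norm, qmc, multivariate_normal

# Toy sketch: estimate p(x) = integral of p(x|z) p(z) dz with randomized
# quasi-Monte Carlo over a 2-D latent, using a linear-Gaussian decoder so
# the exact marginal is known in closed form.
rng = np.random.default_rng(0)
d_z, d_x, sigma = 2, 5, 1.0
A = rng.standard_normal((d_x, d_z))          # hypothetical decoder weights
x = rng.standard_normal(d_x)                 # one data point

# Scrambled Sobol points in [0,1)^2, mapped to N(0, I) latent samples.
sobol = qmc.Sobol(d=d_z, scramble=True, seed=1)
z = norm.ppf(sobol.random(2**12))            # (4096, d_z)

# Average the decoder likelihood p(x | z) over the QMC latent samples.
means = z @ A.T                              # (4096, d_x) decoder means
resid = x - means
logp_xz = (-0.5 * np.sum(resid**2, axis=1) / sigma**2
           - 0.5 * d_x * np.log(2.0 * np.pi * sigma**2))
p_qmc = np.mean(np.exp(logp_xz))

# Exact marginal for this model: x ~ N(0, A A^T + sigma^2 I).
p_exact = multivariate_normal.pdf(
    x, mean=np.zeros(d_x), cov=A @ A.T + sigma**2 * np.eye(d_x))
rel_err = abs(p_qmc - p_exact) / p_exact
```

In low latent dimension the scrambled Sobol average converges much faster than plain Monte Carlo, which is the regime the abstract argues QLVMs exploit.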

January 26, 2026

A Network of Biologically Inspired Rectified Spectral Units (ReSUs) Learns Hierarchical Features Without Error Backpropagation

S. Qin, J. Pughe-Sanford, A. Genkin, Pembe Gizem Ozdil, P. Greengard, A. Sengupta, D. Chklovskii

We introduce a biologically inspired, multilayer neural architecture composed of Rectified Spectral Units (ReSUs). Each ReSU projects a recent window of its input history onto a canonical direction obtained via canonical correlation analysis (CCA) of previously observed past-future input pairs, and then rectifies either its positive or negative component. By encoding canonical directions in synaptic weights and temporal filters, ReSUs implement a local, self-supervised algorithm for progressively constructing increasingly complex features.
To evaluate both computational power and biological fidelity, we trained a two-layer ReSU network in a self-supervised regime on translating natural scenes. First-layer units, each driven by a single pixel, developed temporal filters resembling those of Drosophila post-photoreceptor neurons (L1/L2 and L3), including their empirically observed adaptation to signal-to-noise ratio (SNR). Second-layer units, which pooled spatially over the first layer, became direction-selective -- analogous to T4 motion-detecting cells -- with learned synaptic weight patterns approximating those derived from connectomic reconstructions.
Together, these results suggest that ReSUs offer (i) a principled framework for modeling sensory circuits and (ii) a biologically grounded, backpropagation-free paradigm for constructing deep self-supervised neural networks.
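A single ReSU, as the abstract describes it, can be sketched in a few lines: estimate the leading canonical direction between past and future input windows via CCA, project the past window onto it, and rectify. The test signal, window sizes, and all variable names below are our own illustration, not the authors' implementation.

```python
import numpy as np

# One-unit sketch: CCA between past and future windows of a smoothed
# random signal, followed by rectification of the projection.
rng = np.random.default_rng(0)
T, w = 5000, 4                                # signal length, window size
s = np.convolve(rng.standard_normal(T + 2 * w), np.ones(8) / 8, mode="same")

past = np.stack([s[t - w:t] for t in range(w, T)])    # (N, w) past windows
future = np.stack([s[t:t + w] for t in range(w, T)])  # (N, w) future windows
past -= past.mean(axis=0)
future -= future.mean(axis=0)

# Thin SVDs whiten each block; the singular vectors of U_past^T U_future
# are the canonical directions, and its singular values the correlations.
Up, Sp, Vtp = np.linalg.svd(past, full_matrices=False)
Uf, Sf, Vtf = np.linalg.svd(future, full_matrices=False)
Uc, Sc, Vtc = np.linalg.svd(Up.T @ Uf)

w_past = Vtp.T @ (Uc[:, 0] / Sp)              # canonical weights in input space
proj = past @ w_past                          # projection onto that direction
resu_pos = np.maximum(proj, 0.0)              # rectified (positive-part) output
corr = Sc[0]                                  # top canonical correlation
```

The rectified projection is the unit's output; a second copy rectifying the negative part, and a second layer pooling over many such units, would follow the same recipe.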

December 29, 2025

Comparing cryo-EM methods and molecular dynamics simulation to investigate heterogeneity in ligand-bound TRPV1

M. Astore, David Silva-Sánchez, R. Blackwell, P. Cossio, S. Hanson

Cryogenic electron microscopy (cryo-EM) has emerged as a powerful method for resolving the structure of biological macromolecules. Recently, several computational methods have been developed to study the heterogeneity of molecules in single-particle cryo-EM. In this study, we analyze a publicly available dataset of TRPV1 using five such methods: 3DFlex, 3DVA, cryoDRGN, ManifoldEM, and Bayesian ensemble reweighting. We find significant heterogeneity, but each method produces different results, with some detecting only compositional or conformational heterogeneity. To compare these diverse results, we develop AnaVox to quantitatively determine agreement between heterogeneity methods. Furthermore, applying Bayesian ensemble reweighting combined with molecular dynamics simulations supports the presence of these rarer states within the sample. This study shows that although current methods reveal the presence of heterogeneity, their stochasticity and potential bias present challenges for their routine use. However, with future development, these tools will enable the use of cryo-EM data for quantitative biophysical investigations.


Age-related nigral downregulation of the Parkinson’s risk factor FAM49B primes human microglia for inflammaging

Jacqueline Martin, C. Park, O. Troyanskaya, et al.

Parkinson’s Disease (PD) is characterized by the loss of dopaminergic neurons in the substantia nigra pars compacta (SNpc), which is associated with changes in microglia function. While age remains the biggest risk factor, the underlying molecular cause of PD onset and its concurrent neuroinflammation are not well understood. Many identified PD risk genes have been directly linked to dopamine neuron impairment, while others are linked to immune cell function. In this study, we found that the PD risk gene FAM49B is critically expressed in microglia of the human SNpc and is downregulated with age and PD. We utilized human and murine microglia cells to demonstrate the role of FAM49B in regulating fundamental microglial functions such as cytoskeletal maintenance, migration, surface adherence, energy homeostasis, autophagy, and, importantly, inflammatory response. Downregulation of microglial FAM49B, as observed in the SNpc of aging individuals, led to significant alterations in these cellular functions, which are associated with increased microglial activation. Thus, our study highlights novel cell-type-specific roles of FAM49B and provides a potential mechanism for the susceptibility to neuroinflammation and reactive gliosis observed in both PD and normal aging.

December 19, 2025

Disentangled representations via score-based variational autoencoders

Benjamin S. H. Lyo, C. Savin, E. P. Simoncelli

We present the Score-based Autoencoder for Multiscale Inference (SAMI), a method for unsupervised representation learning that combines the theoretical frameworks of diffusion models and VAEs. By unifying their respective evidence lower bounds, SAMI formulates a principled objective that learns representations through score-based guidance of the underlying diffusion process. The resulting representations automatically capture meaningful structure in the data: SAMI recovers ground-truth generative factors in our synthetic dataset, learns factorized, semantic latent dimensions from complex natural images, and encodes video sequences into latent trajectories that are straighter than those of alternative encoders, despite training exclusively on static images. Furthermore, SAMI can extract useful representations from pre-trained diffusion models with minimal additional training. Finally, the explicitly probabilistic formulation provides new ways to identify semantically meaningful axes in the absence of supervised labels, and its mathematical exactness allows us to make formal statements about the nature of the learned representation. Overall, these results indicate that implicit structural information in diffusion models can be made explicit and interpretable through synergistic combination with a variational autoencoder.

December 18, 2025

Stabilizing the singularity swap quadrature for near-singular line integrals

David Krantz, A. Barnett, Anna-Karin Tornberg

Singularity swap quadrature (SSQ) is an effective method for the evaluation at nearby targets of potentials due to densities on curves in three dimensions. While highly accurate in most settings, it is known to suffer from catastrophic cancellation when the kernel exhibits both near-vanishing numerators and strong singularities, as arises with scalar double layer potentials or tensorial kernels in Stokes flow or linear elasticity. This precision loss turns out to be tied to the interpolation basis, namely monomial (for open curves) or Fourier (for closed curves). We introduce a simple yet powerful remedy: target-specific translated monomial and Fourier bases that explicitly incorporate the near-vanishing behavior of the kernel numerator. We combine this with a stable evaluation of the constant term which now dominates the integral, significantly reducing cancellation. We show that our approach achieves close to machine precision for prototype integrals, and up to ten orders of magnitude lower error than standard SSQ at extremely close evaluation distances, without significant additional computational cost.
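The cancellation and its remedy can be illustrated on a scalar toy problem: evaluating a polynomial with a root at t0 from a target extremely close to t0. This sketch is ours, not the SSQ implementation; it contrasts the standard monomial basis with a basis translated to t0 in which the vanishing factor is kept exact, mirroring the translated-basis idea above.

```python
import numpy as np
from fractions import Fraction
from math import comb

# f(t) = (t - t0) g(t) vanishes at t0; evaluating it at a target extremely
# close to t0 loses digits in the standard monomial basis, but not in a
# basis translated to t0 that carries the vanishing factor exactly.
t0 = 0.987654321
g = [1.0, 2.0, 3.0, 4.0]                # g(t) = 1 + 2t + 3t^2 + 4t^3
t = t0 + 1e-12                          # target extremely close to the root

def horner(c, x):                       # evaluate ascending coefficients c at x
    y = 0.0
    for ck in reversed(c):
        y = y * x + ck
    return y

# Standard monomial basis: expand f and evaluate directly (cancellation).
c = list(np.polymul([1.0, -t0], g[::-1])[::-1])   # ascending coeffs of f
f_mono = horner(c, t)

# Translated basis: f(t0 + s) = s * g(t0 + s). Taylor-shift g to powers of
# s, then prepend a zero so the vanishing factor s is represented exactly.
g_shift = [sum(g[k] * comb(k, j) * t0 ** (k - j) for k in range(j, 4))
           for j in range(4)]
b = [0.0] + g_shift                     # coeffs of f in powers of s = t - t0
f_trans = horner(b, t - t0)

# Exact reference in rational arithmetic.
tf, t0f = Fraction(t), Fraction(t0)
f_exact = float((tf - t0f) * sum(Fraction(gk) * tf**k for k, gk in enumerate(g)))

err_mono = abs(f_mono - f_exact) / abs(f_exact)
err_trans = abs(f_trans - f_exact) / abs(f_exact)
```

The translated evaluation stays near machine precision while the monomial one loses several digits, a scalar analogue of the many-orders-of-magnitude gap the abstract reports at close evaluation distances.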
