2596 Publications

SILVER: Single-loop variance reduction and application to federated learning

Kazusato Oko, Shunta Akiyama, D. Wu, Tomoya Murata, Taiji Suzuki

Most variance reduction methods require multiple full-gradient computations, which are time-consuming and hence a bottleneck when applied to distributed optimization. We present a single-loop variance-reduced gradient estimator named SILVER (SIngle-Loop VariancE-Reduction) for finite-sum non-convex optimization, which does not require multiple full gradients yet achieves the optimal gradient complexity. Notably, unlike existing methods, SILVER provably reaches second-order optimality, converges exponentially in the Polyak-Łojasiewicz (PL) region, and achieves further speedup depending on the data heterogeneity. Owing to these advantages, SILVER serves as a new base method for designing communication-efficient federated learning algorithms: we combine SILVER with local updates, which yields the best communication rounds and number of communicated gradients across the full range of Hessian heterogeneity while simultaneously guaranteeing second-order optimality and exponential convergence in the PL region.
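
The abstract does not spell out SILVER's update rule. As a non-authoritative sketch of the single-loop variance-reduction idea it builds on (written here in the style of STORM-type recursive estimators, not SILVER itself), the following avoids any periodic full-gradient recomputation; all names, signatures, and constants are illustrative:

```python
# Hedged sketch: a generic single-loop variance-reduced estimator in the
# STORM/recursive-momentum style. SILVER's actual estimator is not given in
# the abstract; this only illustrates the single-loop structure, i.e. no
# full-gradient anchors as in SVRG-type double loops.
import numpy as np

def single_loop_vr_sgd(grad_i, n, x0, lr=0.1, beta=0.9, steps=1000,
                       batch=8, seed=None):
    """Minimize (1/n) sum_i f_i(x) without ever computing a full gradient.

    grad_i(x, idx) returns the average gradient of f_i over indices `idx`.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    idx = rng.integers(n, size=batch)
    d = grad_i(x, idx)                      # initial estimator from one minibatch
    for _ in range(steps):
        x_prev = x.copy()
        x = x - lr * d
        idx = rng.integers(n, size=batch)   # fresh minibatch each step
        # Recursive correction: evaluate the SAME minibatch at x and x_prev,
        # so the estimator's variance contracts as iterates move slowly.
        d = grad_i(x, idx) + beta * (d - grad_i(x_prev, idx))
    return x
```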

Learning Associative Memories with Gradient Descent

Vivien Cabannes, B. Şimşek, A. Bietti

This work focuses on the training dynamics of one associative memory module storing outer products of token embeddings. We reduce this problem to the study of a system of particles, which interact according to the properties of the data distribution and the correlations between embeddings. Through theory and experiments, we provide several insights. In overparameterized regimes, we obtain logarithmic growth of the “classification margins.” Yet, we show that imbalance in token frequencies and memory interference due to correlated embeddings lead to oscillatory transitory regimes. The oscillations are more pronounced with large step sizes, which can create benign loss spikes, although such learning rates also speed up the dynamics and accelerate asymptotic convergence. We also find that underparameterized regimes lead to suboptimal memorization schemes. Finally, we assess the validity of our findings on small Transformer models.
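
A minimal sketch of the setting as described in the abstract, assuming one memory matrix W that scores output tokens via u_y^T W e_x and is trained by gradient descent on cross-entropy over heavy-tailed token frequencies; all sizes, embeddings, and the identity association x -> y = x are illustrative, not the authors' exact setup:

```python
# Hedged sketch: a single associative memory W accumulating outer products
# under gradient descent, with Zipf-distributed token frequencies.
import numpy as np

rng = np.random.default_rng(0)
d, N = 64, 32                                   # embedding dim, vocabulary size
E = rng.standard_normal((N, d)) / np.sqrt(d)    # input token embeddings e_x
U = rng.standard_normal((N, d)) / np.sqrt(d)    # output token embeddings u_y
freq = 1.0 / np.arange(1, N + 1)                # heavy-tailed (Zipf) frequencies
freq /= freq.sum()

def margin(W, x, y):
    s = U @ W @ E[x]                            # scores of all candidate outputs
    return s[y] - np.max(np.delete(s, y))       # the "classification margin"

W, lr = np.zeros((d, d)), 1.0
for step in range(5000):
    x = rng.choice(N, p=freq)                   # frequent tokens sampled more often
    p = U @ W @ E[x]
    p = np.exp(p - p.max()); p /= p.sum()       # softmax over outputs
    p[x] -= 1.0                                 # grad of cross-entropy w.r.t. scores
    W -= lr * np.outer(U.T @ p, E[x])           # each step adds outer products to W

# Per the paper: margins of rare tokens grow more slowly, and large lr can
# produce oscillations/loss spikes. (Here d > N, i.e., overparameterized.)
margins = [margin(W, x, x) for x in range(N)]
```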

Listening to the noise: Blind Denoising with Gibbs Diffusion

David Heurtel-Depeiges, C. Margossian, R. Ohana, B. Régaldo-Saint Blancard

In recent years, denoising problems have become intertwined with the development of deep generative models. In particular, diffusion models are trained like denoisers, and the distribution they model coincides with the denoising prior in the Bayesian picture. However, denoising through diffusion-based posterior sampling requires the noise level and covariance to be known, preventing blind denoising. We overcome this limitation by introducing Gibbs Diffusion (GDiff), a general methodology addressing posterior sampling of both the signal and the noise parameters. Assuming arbitrary parametric Gaussian noise, we develop a Gibbs algorithm that alternates between sampling steps from a conditional diffusion model, trained to map the signal prior to the family of noise distributions, and a Monte Carlo sampler to infer the noise parameters. Our theoretical analysis highlights potential pitfalls, guides diagnostic usage, and quantifies errors in the Gibbs stationary distribution caused by the diffusion model. We showcase our method for 1) blind denoising of natural images involving colored noise with unknown amplitude and exponent, and 2) a cosmology problem, namely the analysis of cosmic microwave background data, where Bayesian inference of "noise" parameters means constraining models of the evolution of the Universe.
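
Structurally, the Gibbs loop described in the abstract might look like the following sketch. The two conditional samplers are passed in as callables, since a real diffusion model is out of scope here; all names are assumptions rather than the authors' API:

```python
# Hedged sketch of the GDiff-style blind-denoising loop: alternate
# (i) sampling the signal x from a conditional diffusion model given the
# current noise parameters phi, and (ii) sampling phi given the residual
# y - x with a Monte Carlo step (e.g. Metropolis-Hastings or HMC).
import numpy as np

def gibbs_diffusion(y, sample_x_given_phi, sample_phi_given_residual,
                    phi0, n_iter=100):
    """y: noisy observation. Returns a list of posterior draws of (x, phi)."""
    phi, draws = phi0, []
    for _ in range(n_iter):
        x = sample_x_given_phi(y, phi)               # conditional diffusion step
        phi = sample_phi_given_residual(y - x, phi)  # MC step on noise params
        draws.append((x, phi))
    return draws
```

After burn-in, the draws approximate the joint posterior over signal and noise parameters, which is what makes the denoising "blind": phi need not be known in advance.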

Cytoplasmic stirring by active carpets

B. Chakrabarti, M. Rachh, S. Shvartsman, M. Shelley

Large cells often rely on cytoplasmic flows for intracellular transport, maintaining homeostasis, and positioning cellular components. Understanding the mechanisms of these flows is essential for gaining insights into cell function, developmental processes, and evolutionary adaptability. Here, we focus on a class of self-organized cytoplasmic stirring mechanisms that result from fluid–structure interactions between cytoskeletal elements at the cell cortex. Drawing inspiration from streaming flows in late-stage fruit fly oocytes, we propose an analytically tractable active carpet theory. This model deciphers the origins and three-dimensional spatiotemporal organization of such flows. Through a combination of simulations and weakly nonlinear theory, we establish the pathway of the streaming flow to its global attractor: a cell-spanning vortical twister. Our study reveals the inherent symmetries of this emergent flow, its low-dimensional structure, and illustrates how complex fluid–structure interaction aligns with classical solutions in Stokes flow. This framework can be easily adapted to elucidate a broad spectrum of self-organized, cortex-driven intracellular flows.

Amortized Variational Inference: When and Why?

C. Margossian, D. Blei

In a probabilistic latent variable model, factorized (or mean-field) variational inference (F-VI) fits a separate parametric distribution for each latent variable. Amortized variational inference (A-VI) instead learns a common inference function, which maps each observation to its corresponding latent variable’s approximate posterior. Typically, A-VI is used as a cog in the training of variational autoencoders; however, it stands to reason that A-VI could also serve as a general alternative to F-VI. In this paper we study when and why A-VI can be used for approximate Bayesian inference. We derive necessary, sufficient, and verifiable conditions on a latent variable model under which A-VI can attain F-VI’s optimal solution, thereby closing the amortization gap. We prove these conditions are uniquely verified by simple hierarchical models, a broad class that encompasses many models in machine learning. We then show, on a broader class of models, how to expand the domain of A-VI’s inference function to improve its solution, and we provide examples, e.g. hidden Markov models, where the amortization gap cannot be closed.
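
The contrast between the two variational families can be made concrete with a small sketch; shapes and the Gaussian/linear parameterizations are illustrative only. F-VI keeps free parameters per latent variable (parameter count grows with n), while A-VI shares one inference function across observations (fixed parameter count), which is the source of the amortization gap:

```python
# Hedged sketch: F-VI vs. A-VI variational families for n observations x_i,
# each with its own latent z_i, under a Gaussian approximation.
import numpy as np

n, d_obs, d_lat = 100, 5, 2
x = np.random.randn(n, d_obs)                 # observations

# F-VI: one free Gaussian (mu_i, log_sigma_i) per latent variable.
fvi_params = {"mu": np.zeros((n, d_lat)),
              "log_sigma": np.zeros((n, d_lat))}

# A-VI: a single inference function shared across observations; here a
# linear map x_i -> (mu_i, log_sigma_i). Its output is tied to x_i, so it
# can only match F-VI's optimum if that optimum is a function of x_i.
W = np.zeros((d_obs, 2 * d_lat))
def avi_params(x_i):
    out = x_i @ W
    return out[:d_lat], out[d_lat:]
```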

Heavy-Tailed Class Imbalance and Why Adam Outperforms Gradient Descent on Language Models

Frederik Kunstner, Robin Yadav, Alan Milligan, Mark Schmidt, A. Bietti

Adam has been shown to outperform gradient descent on large language models by a larger margin than on other tasks, but it is unclear why. We show that a key factor in this performance gap is the heavy-tailed class imbalance found in language tasks. When trained with gradient descent, the loss of infrequent words decreases more slowly than the loss of frequent ones. This leads to a slow decrease in the average loss, as most samples come from infrequent words. On the other hand, Adam and sign-based methods are less sensitive to this problem. To establish that this behavior is caused by class imbalance, we show empirically that it can be reproduced across architectures and data types, on language transformers, vision CNNs, and linear models. On a linear model with cross-entropy loss, we show that class imbalance leads to imbalanced, correlated gradients and Hessians, which have been hypothesized to benefit Adam. We also prove that, in continuous time, gradient descent converges slowly on low-frequency classes while sign descent does not.
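
A rough sketch of the kind of linear-model experiment the abstract describes, assuming softmax regression with Zipf-distributed class frequencies and comparing gradient descent to sign descent; all sizes and learning rates are illustrative, not the authors' exact setup:

```python
# Hedged sketch: per-class loss under GD vs. sign descent with
# heavy-tailed class imbalance, on a linear softmax model.
import numpy as np

rng = np.random.default_rng(0)
k, d, n = 50, 20, 5000
freq = 1.0 / np.arange(1, k + 1); freq /= freq.sum()   # Zipf class frequencies
y = rng.choice(k, size=n, p=freq)
X = rng.standard_normal((n, d)) + 2.0 * rng.standard_normal((k, d))[y]

def per_class_loss(W):
    logits = X @ W.T
    m = logits.max(1, keepdims=True)                   # stable log-softmax
    logp = logits - m - np.log(np.exp(logits - m).sum(1, keepdims=True))
    nll = -logp[np.arange(n), y]
    return np.array([nll[y == c].mean() for c in range(k)])

def train(signed, lr, steps=200):
    W = np.zeros((k, d))
    for _ in range(steps):
        logits = X @ W.T
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        p[np.arange(n), y] -= 1.0                      # softmax - onehot
        g = p.T @ X / n                                # full-batch gradient
        W -= lr * (np.sign(g) if signed else g)
    return per_class_loss(W)

gd_loss = train(signed=False, lr=1.0)
sign_loss = train(signed=True, lr=0.01)
# Expectation per the paper: gd_loss stays high on rare classes (large
# indices), while sign descent decreases losses more uniformly.
```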

Context-invariant beliefs are supported by dynamic reconfiguration of single unit functional connectivity in prefrontal cortex of male macaques

Jean-Paul Noel, E. Balzani, Cristina Savin, D. Angelaki

Natural behaviors occur in closed action-perception loops and are supported by dynamic and flexible beliefs abstracted away from our immediate sensory milieu. How this real-world flexibility is instantiated in neural circuits remains unknown. Here, we have male macaques navigate in a virtual environment by primarily leveraging sensory (optic flow) signals or by relying more heavily on acquired internal models. We record single-unit spiking activity simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and the dorsolateral prefrontal cortex (dlPFC). Results show that while animals were able to maintain adaptive task-relevant beliefs regardless of sensory context, the fine-grained statistical dependencies between neurons, particularly in 7a and dlPFC, dynamically remapped with the changing computational demands. In dlPFC, but not 7a, destroying these statistical dependencies abolished the area’s ability for cross-context decoding. Lastly, correlational analyses suggested that the more unit-to-unit couplings remapped in dlPFC, and the less they did so in MSTd, the less population codes and behavior were impacted by the loss of sensory evidence. We conclude that dynamic functional connectivity between neurons in the prefrontal cortex maintains a stable population code and context-invariant beliefs during naturalistic behavior.

E. coli do not count single molecules

H. Mattingly, Keita Kamino, Jude Ong, et al.

Organisms must perform sensory-motor behaviors to survive. What bounds or constraints limit behavioral performance? Previously, we found that the gradient-climbing speed of chemotaxing Escherichia coli is near a bound set by the limited information they acquire from their chemical environments (1). Here we ask what limits their sensory accuracy. Past theoretical analyses have shown that the stochasticity of single-molecule arrivals sets a fundamental limit on the precision of chemical sensing (2). Although it has been argued that bacteria approach this limit, direct evidence is lacking. Here, using information theory and quantitative experiments, we find that E. coli’s chemosensing is not limited by the physics of particle counting. First, we derive the physical limit on the behaviorally relevant information that any sensor can acquire about a changing chemical concentration, assuming that every molecule arriving at the sensor is recorded. Then, we derive and measure how much information E. coli’s signaling pathway encodes during chemotaxis. We find that E. coli encode two orders of magnitude less information than an ideal sensor limited only by shot noise in particle arrivals. These results strongly suggest that constraints other than particle-arrival noise limit E. coli’s sensory fidelity.
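
For context, the classic particle-counting limit invoked in reference (2) is commonly stated as the Berg-Purcell bound; the following is standard background, not this paper's derivation:

```latex
% Standard background (Berg & Purcell 1977), assumed here to be the
% particle-counting limit the abstract refers to: the fractional
% concentration error of an ideal absorbing sensor of radius $a$, measuring
% concentration $c$ over integration time $T$ with molecular diffusivity
% $D$, is bounded by shot noise in molecule arrivals:
\[
  \frac{\delta c}{c} \;\gtrsim\; \frac{1}{\sqrt{D\,a\,c\,T}},
\]
% since roughly $N \sim D a c T$ independent molecules reach the sensor in
% time $T$, and Poisson counting gives a relative error of $1/\sqrt{N}$.
```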

Yardangs sculpted by erosion of heterogeneous material

Samuel Boury, S. Weady, Leif Ristroph, et al.

The recognizable shapes of landforms arise from processes such as erosion by wind or water currents. However, explaining the physical origin of natural structures is challenging due to the coupled evolution of complex flow fields and three-dimensional (3D) topographies. We investigate these issues in a laboratory setting inspired by yardangs, which are raised, elongate formations whose characteristic shape suggests erosion of heterogeneous material by directional flows. We combine experiments and simulations to test an origin hypothesis involving a harder or less erodible inclusion embedded in an outcropping of softer material. Optical scans of clay objects fixed within flowing water reveal a transformation from a featureless mound to a yardang-like form resembling a lion in repose. Phase-field simulations reproduce similar shape dynamics and show their dependence on the erodibility contrast and flow strength. Through visualizations of the flow fields and analysis of the local erosion rate, we identify effects associated with flow funneling and the turbulent wake that are responsible for carving the unique geometrical features. This highly 3D scouring process produces complex shapes from simple and commonplace starting conditions and is thus a candidate explanation for natural yardangs. The methods introduced here should be generally useful for geomorphological problems and especially those for which material heterogeneity is a primary factor.

Delayed rejection Hamiltonian Monte Carlo for sampling multiscale distributions

The efficiency of Hamiltonian Monte Carlo (HMC) can suffer when sampling a distribution with a wide range of length scales, because the small step sizes needed for stability in high-curvature regions are inefficient elsewhere. To address this, we present a delayed rejection (DR) variant: if an initial HMC trajectory is rejected, we make one or more subsequent proposals, each using a step size geometrically smaller than the last. To reduce the cost of the DR approach, we extend standard delayed rejection to a probabilistic framework in which we do not make multiple proposals at every rejection but instead allow the probability of a retry to depend on the probability of accepting the previous proposal. We test the scheme on several sampling tasks, including statistical applications and multiscale model distributions such as Neal’s funnel. Delayed rejection enables sampling multiscale distributions for which standard approaches such as HMC fail to explore the tails, and improves performance five-fold over optimally tuned HMC as measured by effective sample size per gradient evaluation. Even for simpler distributions, delayed rejection provides increased robustness to step-size misspecification.
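
A minimal sketch of one delayed-rejection stage for HMC, assuming the retry reuses the starting point and momentum with a geometrically smaller step size and a proportionally longer trajectory; the probabilistic-retry extension and all tuning choices from the abstract are omitted, and all names are illustrative:

```python
# Hedged sketch: two-stage delayed-rejection HMC. The stage-2 acceptance
# includes the standard DR correction (via a "ghost" stage-1 proposal from
# the retry point) so that detailed balance still holds.
import numpy as np

def leapfrog(x, p, grad_logp, eps, L):
    x, p = x.copy(), p.copy()
    p += 0.5 * eps * grad_logp(x)
    for _ in range(L - 1):
        x += eps * p
        p += eps * grad_logp(x)
    x += eps * p
    p += 0.5 * eps * grad_logp(x)
    return x, -p                       # momentum flip -> involutive proposal

def dr_hmc_step(x, logp, grad_logp, eps, L, shrink=4, rng=np.random):
    def H(x, p): return -logp(x) + 0.5 * p @ p
    p = rng.standard_normal(x.shape)
    x1, p1 = leapfrog(x, p, grad_logp, eps, L)
    a1 = min(1.0, np.exp(H(x, p) - H(x1, p1)))
    if rng.random() < a1:
        return x1
    # Stage 2: retry from the SAME (x, p) with a smaller step; more leapfrog
    # steps keep the trajectory length comparable.
    x2, p2 = leapfrog(x, p, grad_logp, eps / shrink, L * shrink)
    # Ghost stage-1 move from the retry point; its acceptance probability
    # enters the delayed-rejection correction factor.
    xg, pg = leapfrog(x2, p2, grad_logp, eps, L)
    ag = min(1.0, np.exp(H(x2, p2) - H(xg, pg)))
    a2 = min(1.0, np.exp(H(x, p) - H(x2, p2)) * (1 - ag) / (1 - a1))
    return x2 if rng.random() < a2 else x
```

On a funnel-like target, the stage-2 proposal rescues moves in the high-curvature neck that the stage-1 step size would reject, which is the behavior the abstract reports.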
