421 Publications

Classical variational phase-field models cannot predict fracture nucleation

Oscar Lopez-Pamies, John E. Dolbow, G. Francfort, Christopher J. Larsen

Notwithstanding the evidence against them, classical variational phase-field models continue to be used and pursued in an attempt to describe fracture nucleation in elastic brittle materials. In this context, the main objective of this paper is to provide a comprehensive review of the existing evidence against such a class of models as descriptors of fracture nucleation. To that end, a review is first given of the plethora of experimental observations of fracture nucleation in nominally elastic brittle materials under quasi-static loading conditions, as well as of classical variational phase-field models, without and with energy splits. These models are then confronted with the experimental observations. The conclusion is that they cannot possibly describe fracture nucleation in general. This is because classical variational phase-field models cannot account for material strength as an independent macroscopic material property. The last part of the paper includes a brief summary of a class of phase-field models that can describe fracture nucleation. It also provides a discussion of how pervasively material strength has been overlooked in the analysis of fracture at large, as well as an outlook into the modeling of fracture nucleation beyond the basic setting of elastic brittle materials.
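For context, a representative classical variational phase-field energy of the kind reviewed here (an AT2-type functional, in one common normalization; the notation below is generic rather than the paper's) is

$$E_\ell(u,d) = \int_\Omega \big[(1-d)^2 + \eta_\ell\big]\, W(\varepsilon(u))\,\mathrm{d}x + G_c \int_\Omega \left(\frac{d^2}{2\ell} + \frac{\ell}{2}\,|\nabla d|^2\right)\mathrm{d}x,$$

where $u$ is the displacement, $d \in [0,1]$ the phase field ($d = 1$ on the crack), $W$ the elastic energy density, $\varepsilon(u)$ the strain, $\eta_\ell$ a small residual stiffness, $G_c$ the critical energy release rate, and $\ell$ the regularization length. Only the elasticity and $G_c$ enter as material inputs; no strength surface appears as an independent property, which is the deficiency at the heart of the paper's argument.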


Decomposing imaginary time Feynman diagrams using separable basis functions: Anderson impurity model strong coupling expansion

J. Kaye, Zhen Huang, Hugo Strand, Denis Golez

We present a deterministic algorithm for the efficient evaluation of imaginary time diagrams based on the recently introduced discrete Lehmann representation (DLR) of imaginary time Green's functions. In addition to the efficient discretization of diagrammatic integrals afforded by its approximation properties, the DLR basis is separable in imaginary time, allowing us to decompose diagrams into linear combinations of nested sequences of one-dimensional products and convolutions. Focusing on the strong coupling bold-line expansion of generalized Anderson impurity models, we show that our strategy reduces the computational complexity of evaluating an $M$th-order diagram at inverse temperature $\beta$ and spectral width $\omega_{\max}$ from $\mathcal{O}((\beta \omega_{\max})^{2M-1})$ for a direct quadrature to $\mathcal{O}(M (\log (\beta \omega_{\max}))^{M+1})$, with controllable high-order accuracy. We benchmark our algorithm using third-order expansions for multi-band impurity problems with off-diagonal hybridization and spin-orbit coupling, presenting comparisons with exact diagonalization and quantum Monte Carlo approaches. In particular, we perform a self-consistent dynamical mean-field theory calculation for a three-band Hubbard model with strong spin-orbit coupling representing a minimal model of Ca$_2$RuO$_4$, demonstrating the promise of the method for modeling realistic strongly correlated multi-band materials. For both strong and weak coupling expansions of low and intermediate order, in which diagrams can be enumerated, our method provides an efficient, straightforward, and robust black-box evaluation procedure. In this sense, it fills a gap between diagrammatic approximations of the lowest order, which are simple and inexpensive but inaccurate, and those based on Monte Carlo sampling of high-order diagrams.
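For background, the DLR expansion underlying the algorithm takes the form (up to sign and normalization conventions)

$$G(\tau) \approx \sum_{k=1}^{r} \widehat{g}_k\, K(\tau,\omega_k), \qquad K(\tau,\omega) = \frac{e^{-\omega\tau}}{1+e^{-\beta\omega}}, \qquad r = \mathcal{O}\big(\log(\beta\omega_{\max})\,\log(1/\epsilon)\big),$$

and the separability referred to above is that of the exponential itself, $e^{-\omega_k(\tau-\tau')} = e^{-\omega_k\tau}\, e^{\omega_k\tau'}$: each intermediate time variable in a diagram can be integrated out on its own, turning an $M$-dimensional diagrammatic integral into nested one-dimensional products and convolutions.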


Cosmological constraints from non-Gaussian and nonlinear galaxy clustering using the SimBIG inference framework

ChangHoon Hahn, Pablo Lemos, Liam Parker, B. Régaldo-Saint Blancard, M. Eickenberg, Shirley Ho, Jiamin Hou, Elena Massara, Chirag Modi, Azadeh Moradinezhad Dizgah, David Spergel

The standard ΛCDM cosmological model predicts the presence of cold dark matter, with the current accelerated expansion of the Universe driven by dark energy. This model has recently come under scrutiny because of tensions in measurements of the expansion and growth histories of the Universe, parameterized using $H_0$ and $S_8$. The three-dimensional clustering of galaxies encodes key cosmological information that addresses these tensions. Here we present a set of cosmological constraints using simulation-based inference that exploits additional non-Gaussian information on nonlinear scales from galaxy clustering, inaccessible with current analyses. We analyse a subset of the Baryon Oscillation Spectroscopic Survey (BOSS) galaxy survey using SimBIG, a new framework for cosmological inference that leverages high-fidelity simulations and deep generative models. We use two clustering statistics beyond the standard power spectrum: the bispectrum and a summary of the galaxy field based on a convolutional neural network. We constrain $H_0$ and $S_8$ 1.5 and 1.9 times more tightly, respectively, than power spectrum analyses. With this increased precision, our constraints are competitive with those of other cosmological probes, even with only 10% of the full BOSS volume. Future work extending SimBIG to upcoming spectroscopic galaxy surveys (DESI, PFS, Euclid) will produce improved cosmological constraints that will deepen our understanding of cosmic tensions.
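As a schematic of the simulation-based-inference workflow used here (a minimal sketch only: a toy simulator and a diagonal-Gaussian posterior head stand in for SimBIG's forward-modeled galaxy catalogs and deep generative density estimators, and all names and numbers are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy forward model: 2 parameters -> noisy 10-dim "summary statistic".
def simulate(theta):
    basis = torch.linspace(0.1, 1.0, 10)
    return theta[:, :1] * basis + theta[:, 1:] * basis**2 + 0.05 * torch.randn(theta.shape[0], 10)

theta = torch.rand(20000, 2)          # draws from a uniform prior on [0, 1]^2
x = simulate(theta)                   # paired simulations for training

# Amortized posterior q(theta | x): net outputs mean and log-std of a diagonal Gaussian.
net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    idx = torch.randint(0, theta.shape[0], (256,))
    out = net(x[idx])
    mu, log_sigma = out[:, :2], out[:, 2:]
    # Maximize log q(theta_true | x) over simulated pairs.
    nll = (log_sigma + 0.5 * ((theta[idx] - mu) / log_sigma.exp()) ** 2).sum(dim=1).mean()
    opt.zero_grad(); nll.backward(); opt.step()

# Inference on a new "observation" is a single forward pass.
x_obs = simulate(torch.tensor([[0.3, 0.8]]))
out = net(x_obs)
print("posterior mean:", out[0, :2].detach(), "posterior std:", out[0, 2:].exp().detach())
```

In practice the Gaussian head is replaced by a flexible density estimator such as a normalizing flow, but the train-on-simulations, amortize-over-observations structure is the same.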


Incorporating Local Step-Size Adaptivity into the No-U-Turn Sampler using Gibbs Self Tuning

N. Bou-Rabee, B. Carpenter, Tore Selland Kleppe, Milo Marsden

Adapting the step size locally in the no-U-turn sampler (NUTS) is challenging because the step-size and path-length tuning parameters are interdependent. The determination of an optimal path length requires a predefined step size, while the ideal step size must account for errors along the selected path. Ensuring reversibility further complicates this tuning problem. In this paper, we present a method for locally adapting the step size in NUTS that is an instance of the Gibbs self-tuning (GIST) framework. Our approach guarantees reversibility with an acceptance probability that depends exclusively on the conditional distribution of the step size. We validate our step-size-adaptive NUTS method on Neal's funnel density and a high-dimensional normal distribution, demonstrating its effectiveness in challenging scenarios.
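A rough sketch of the GIST idea, with fixed-path-length HMC standing in for NUTS and an arbitrary, illustrative step-size conditional (the paper's construction is more careful; everything named below is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: standard normal in d dimensions; U = -log pi.
d = 5
U = lambda q: 0.5 * q @ q
grad_U = lambda q: q

def leapfrog(q, p, eps, L):
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(L - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

# Illustrative step-size conditional: log-normal centered on a crude local scale.
def eps_logpdf(eps, q):
    mu = np.log(0.5 / np.sqrt(1.0 + grad_U(q) @ grad_U(q)))
    z = (np.log(eps) - mu) / 0.5
    return -0.5 * z * z - np.log(eps)

def eps_sample(q):
    mu = np.log(0.5 / np.sqrt(1.0 + grad_U(q) @ grad_U(q)))
    return np.exp(mu + 0.5 * rng.standard_normal())

q = rng.standard_normal(d)
samples = []
for it in range(5000):
    p = rng.standard_normal(d)
    eps = eps_sample(q)                       # Gibbs step: refresh the tuning parameter
    q_new, p_new = leapfrog(q, p, eps, L=10)
    # GIST-style acceptance: Hamiltonian ratio times ratio of step-size conditionals
    # at the proposed vs. current state, which is what preserves reversibility.
    log_a = (U(q) + 0.5 * p @ p) - (U(q_new) + 0.5 * p_new @ p_new) \
            + eps_logpdf(eps, q_new) - eps_logpdf(eps, q)
    if np.log(rng.random()) < log_a:
        q = q_new
    samples.append(q.copy())

print("sample mean (should be near 0):", np.mean(samples, axis=0).round(2))
```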


Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics

Alireza Mousavi-Hosseini, D. Wu, Murat A. Erdogdu

We study the problem of learning multi-index models in high-dimensions using a two-layer neural network trained with the mean-field Langevin algorithm. Under mild distributional assumptions on the data, we characterize the effective dimension $d_{\mathrm{eff}}$ that controls both sample and computational complexity by utilizing the adaptivity of neural networks to latent low-dimensional structures. When the data exhibit such a structure, $d_{\mathrm{eff}}$ can be significantly smaller than the ambient dimension. We prove that the sample complexity grows almost linearly with $d_{\mathrm{eff}}$, bypassing the limitations of the information and generative exponents that appeared in recent analyses of gradient-based feature learning. On the other hand, the computational complexity may inevitably grow exponentially with $d_{\mathrm{eff}}$ in the worst-case scenario. Motivated by improving computational complexity, we take the first steps towards polynomial time convergence of the mean-field Langevin algorithm by investigating a setting where the weights are constrained to be on a compact manifold with positive Ricci curvature, such as the hypersphere. There, we study assumptions under which polynomial time convergence is achievable, whereas similar assumptions in the Euclidean setting lead to exponential time complexity.
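A minimal sketch of mean-field Langevin training of a two-layer network on a toy multi-index task (dimensions, link function, and hyperparameters are made up for illustration):

```python
import torch

torch.manual_seed(0)

# Multi-index target: y depends on x only through a 2-dimensional projection of x.
d, n = 50, 4096
A = torch.linalg.qr(torch.randn(d, 2))[0]            # hidden low-dimensional index space
X = torch.randn(n, d)
y = torch.tanh(X @ A).prod(dim=1, keepdim=True)      # toy multi-index link function

# Two-layer mean-field network: m neuron "particles" (w_i, a_i).
m = 512
W = (torch.randn(m, d) / d**0.5).requires_grad_()
a = (torch.randn(m, 1) / m).requires_grad_()
f = lambda X: torch.tanh(X @ W.T) @ a

eta, lam = 0.05, 1e-4    # step size; lam plays the role of the entropic regularization
for step in range(2000):
    loss = ((f(X) - y) ** 2).mean() + lam * (W.pow(2).sum() + a.pow(2).sum())
    loss.backward()
    with torch.no_grad():
        for p in (W, a):
            # Mean-field Langevin step: gradient descent plus Gaussian noise at temperature lam.
            p -= eta * p.grad + (2 * eta * lam) ** 0.5 * torch.randn_like(p)
            p.grad = None

print("final training loss:", ((f(X) - y) ** 2).mean().item())
```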


CryoBench: Diverse and challenging datasets for the heterogeneity problem in cryo-EM

Minkyu Jeon, M. Astore, S. Hanson, P. Cossio, et al.

Cryo-electron microscopy (cryo-EM) is a powerful technique for determining high-resolution 3D biomolecular structures from imaging data. As this technique can capture dynamic biomolecular complexes, 3D reconstruction methods are increasingly being developed to resolve this intrinsic structural heterogeneity. However, the absence of standardized benchmarks with ground truth structures and validation metrics limits the advancement of the field. Here, we propose CryoBench, a suite of datasets, metrics, and performance benchmarks for heterogeneous reconstruction in cryo-EM. We propose five datasets representing different sources of heterogeneity and degrees of difficulty. These include conformational heterogeneity generated from simple motions and random configurations of antibody complexes and from tens of thousands of structures sampled from a molecular dynamics simulation. We also design datasets containing compositional heterogeneity from mixtures of ribosome assembly states and 100 common complexes present in cells. We then perform a comprehensive analysis of state-of-the-art heterogeneous reconstruction tools including neural and non-neural methods and their sensitivity to noise, and propose new metrics for quantitative comparison of methods. We hope that this benchmark will be a foundational resource for analyzing existing methods and new algorithmic development in both the cryo-EM and machine learning communities.


cppdlr: Imaginary time calculations using the discrete Lehmann representation

J. Kaye, Hugo U. R. Strand, Nils Wentzell

We introduce cppdlr, a C++ library implementing the discrete Lehmann representation (DLR) of functions in imaginary time and Matsubara frequency, such as Green's functions and self-energies. The DLR is based on a low-rank approximation of the analytic continuation kernel, and yields a compact and explicit basis consisting of exponentials in imaginary time and simple poles in Matsubara frequency. cppdlr constructs the DLR basis and associated interpolation grids, and implements standard operations. It provides a flexible yet high-level interface, facilitating the incorporation of the DLR into both small-scale applications and existing large-scale software projects.
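The low-rank idea the library is built on can be illustrated in a few lines (a numpy sketch, not cppdlr's actual algorithm, which uses carefully constructed fine grids and interpolative decompositions):

```python
import numpy as np
from scipy.linalg import qr

beta, wmax, eps = 20.0, 10.0, 1e-10   # modest parameters keep both exp branches finite

tau = np.linspace(0.0, beta, 1000)
omega = np.linspace(-wmax, wmax, 1000)

# K(tau, omega) = exp(-tau*omega) / (1 + exp(-beta*omega)), evaluated in a form
# that avoids large exponents on the branch actually used.
K = np.where(omega >= 0,
             np.exp(-np.outer(tau, omega)) / (1.0 + np.exp(-beta * omega)),
             np.exp(np.outer(beta - tau, omega)) / (1.0 + np.exp(beta * omega)))

# Column-pivoted QR reveals the numerical rank of the kernel and picks
# representative real frequencies: the analogue of the DLR poles.
_, R, piv = qr(K, pivoting=True, mode='economic')
rank = int(np.sum(np.abs(np.diag(R)) > eps * np.abs(R[0, 0])))
print("DLR-like rank:", rank)          # grows like log(beta*wmax) * log(1/eps)
print("selected poles:", np.sort(omega[piv[:rank]]))
```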


Amortized template-matching of molecular conformations from cryo-electron microscopy images using simulation-based inference

Lars Dingeldein, David Silva-Sánchez, L. Evans, P. Cossio, et al.

Biomolecules undergo conformational changes to perform their function. Cryo-electron microscopy (cryo-EM) can capture snapshots of biomolecules in various conformations. However, these images are noisy and display the molecule in unknown orientations, making it difficult to separate conformational differences from differences due to noise or projection directions. Here, we introduce cryo-EM simulation-based inference (cryoSBI) to infer the conformations of biomolecules and the uncertainties associated with the inference from individual cryo-EM images. CryoSBI builds on simulation-based inference, a combination of physics-based simulations and probabilistic deep learning, allowing us to use Bayesian inference even when likelihoods are too expensive to calculate. We begin with an ensemble of conformations, which can be templates from molecular simulations or modelling, and use them as structural hypotheses. We train a neural network approximating the Bayesian posterior using simulated images from these templates, and then use it to accurately infer the conformations of biomolecules from experimental images. Training is only done once, and after that, it takes just a few milliseconds to perform inference on an image, making cryoSBI suitable for arbitrarily large datasets. CryoSBI eliminates the need to estimate particle pose and imaging parameters, significantly enhancing the computational speed in comparison to explicit likelihood methods. We illustrate and benchmark cryoSBI on synthetic data and showcase its promise on experimental single-particle cryo-EM data.
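A toy version of the forward model such a method trains on might look as follows (random pose, projection, and noise only; real pipelines add CTF, translations, and calibrated noise models, and every shape and constant here is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation():
    # Approximately uniform random 3D rotation via QR of a Gaussian matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return Q * np.sign(np.linalg.det(Q))

def render(xyz, npix=64, width=10.0, snr=0.1):
    """Project a 3D template (N x 3 coordinates) in a random pose to a noisy 2D image."""
    coords = xyz @ random_rotation().T
    ij = ((coords[:, :2] / width + 0.5) * npix).astype(int)
    img = np.zeros((npix, npix))
    ok = (ij >= 0).all(axis=1) & (ij < npix).all(axis=1)
    np.add.at(img, (ij[ok, 0], ij[ok, 1]), 1.0)       # crude density projection
    return img + rng.standard_normal(img.shape) * img.std() / np.sqrt(snr)

# Templates as structural hypotheses: frames along a fake conformational coordinate.
templates = [rng.standard_normal((500, 3)) * np.array([1.0, 1.0, 1.0 + 0.2 * c])
             for c in range(10)]
label = rng.integers(len(templates))
image = render(templates[label])          # one (image, conformation) training pair
print("simulated image for conformation", label, "with shape", image.shape)
```

The amortized network is then trained on many such pairs to map an image directly to a posterior over the conformational label, which is why inference on new images costs only a forward pass.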

2024

Nested R̂: Assessing the Convergence of Markov Chain Monte Carlo When Running Many Short Chains

C. Margossian, Matthew D. Hoffman, Pavel Sountsov, Lionel Riou-Durand, Aki Vehtari, Andrew Gelman

Recent developments in parallel Markov chain Monte Carlo (MCMC) algorithms allow us to run thousands of chains almost as quickly as a single chain, using hardware accelerators such as GPUs. While each chain still needs to forget its initial point during a warmup phase, the subsequent sampling phase can be shorter than in classical settings, where we run only a few chains. To determine if the resulting short chains are reliable, we need to assess how close the Markov chains are to their stationary distribution after warmup. The potential scale reduction factor R̂ is a popular convergence diagnostic but unfortunately can require a long sampling phase to work well. We present a nested design to overcome this challenge and a generalization called nested R̂. This new diagnostic works under conditions similar to R̂ and completes the workflow for GPU-friendly samplers. In addition, the proposed nesting provides theoretical insights into the utility of R̂, in both the classical and short-chain regimes.
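An illustrative estimator in the spirit of nested R̂ (a sketch of the nesting idea under the stated design of many short chains grouped into superchains; not the authors' exact definition):

```python
import numpy as np

def nested_rhat(draws):
    """draws: array of shape (K superchains, M chains per superchain, N draws).
    Compare between-superchain variance to the variance within superchains
    (pooled across each superchain's chains)."""
    K, M, N = draws.shape
    pooled = draws.reshape(K, M * N)
    B = pooled.mean(axis=1).var(ddof=1)        # between superchains
    W = pooled.var(axis=1, ddof=1).mean()      # within superchains
    return np.sqrt(1.0 + B / W)

rng = np.random.default_rng(0)
# 16 superchains of 64 short chains; chains in a superchain share one initialization.
inits = rng.standard_normal((16, 1, 1)) * 3.0
draws = 0.1 * inits + rng.standard_normal((16, 64, 10))  # chains still remember inits
print("nested R-hat:", nested_rhat(draws))               # > 1 flags non-convergence
```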


MoMo: Momentum Models for Adaptive Learning Rates

Fabian Schaipp, R. Ohana, M. Eickenberg, Aaron Defazio, R. M. Gower

Training a modern machine learning architecture on a new task requires extensive learning-rate tuning, which comes at a high computational cost. Here we develop new Polyak-type adaptive learning rates that can be used on top of any momentum method, and require less tuning to perform well. We first develop MoMo, a Momentum Model based adaptive learning rate for SGD-M (stochastic gradient descent with momentum). MoMo uses momentum estimates of the batch losses and gradients sampled at each iteration to build a model of the loss function. Our model also makes use of any known lower bound of the loss function by using truncation, e.g. most losses are lower-bounded by zero. The model is then approximately minimized at each iteration to compute the next step. We show how MoMo can be used in combination with any momentum-based method, and showcase this by developing MoMo-Adam - which is Adam with our new model-based adaptive learning rate. We show that MoMo attains a $\mathcal{O}(1/\sqrt{K})$ convergence rate for convex problems with interpolation, requiring no knowledge of problem-specific quantities other than the optimal value. Additionally, for losses with unknown lower bounds, we develop on-the-fly estimates of a lower bound that are incorporated into our model. We demonstrate that MoMo and MoMo-Adam improve over SGD-M and Adam in terms of robustness to hyperparameter tuning for training image classifiers on MNIST, CIFAR, and ImageNet, for recommender systems on the Criteo dataset, for a transformer model on the translation task IWSLT14, and for a diffusion model.
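A sketch of a Polyak-type, model-based step size on top of heavy-ball momentum, in the spirit of MoMo (full-batch toy problem; the paper's exact update and averaging weights may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: least squares, so the loss is known to be lower-bounded by zero.
A, b = rng.standard_normal((200, 20)), rng.standard_normal(200)
loss = lambda x: 0.5 * np.mean((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b) / len(b)

x = np.zeros(20)
d = np.zeros_like(x)        # momentum buffer for gradients
fbar, gxbar = 0.0, 0.0      # momentum averages of losses and of <g_k, x_k>
beta, lr_max, f_star = 0.9, 1.0, 0.0

for k in range(500):
    g, f = grad(x), loss(x)
    d = beta * d + (1 - beta) * g
    fbar = beta * fbar + (1 - beta) * f
    gxbar = beta * gxbar + (1 - beta) * (g @ x)
    # Momentum model of the loss at x: averaged linearizations fbar + <d, x> - gxbar.
    # Polyak-type step: truncated gap to the lower bound, divided by ||d||^2,
    # capped at lr_max.
    gap = fbar + d @ x - gxbar - f_star
    step = min(lr_max, max(gap, 0.0) / (d @ d + 1e-12))
    x -= step * d

print("final loss:", loss(x))
```

The truncation at `f_star` is how a known lower bound enters the model; replacing the plain momentum buffer with Adam-style moments gives the MoMo-Adam variant described above.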
