381 Publications

Adaptive Monte Carlo augmented with normalizing flows

M. Gabrié, Grant M. Rotskoff, E. Vanden-Eijnden

Many problems in the physical sciences, machine learning, and statistical inference necessitate sampling from a high-dimensional, multi-modal probability distribution. Markov Chain Monte Carlo (MCMC) algorithms, the ubiquitous tool for this task, typically rely on random, reversible, and local updates to propagate configurations of a given system in a way that ensures that generated configurations will be distributed according to a target probability distribution asymptotically. In high-dimensional settings with multiple relevant metastable basins, local approaches require either immense computational effort or intricately designed importance sampling strategies to capture information about, for example, the relative populations of such basins. Here we analyze a framework for augmenting MCMC sampling with nonlocal transition kernels parameterized with generative models known as normalizing flows. We focus on a setting where there is no preexisting data, as is commonly the case for problems in which MCMC is used. Our results emphasize that the implementation of the normalizing flow must be adapted to the structure of the target distribution in order to preserve the statistics of the target at all scales. Furthermore, we analyze the propensity of our algorithm to discover new states and demonstrate the importance of initializing the training with some a priori knowledge of the relevant modes.
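As a rough illustration of the nonlocal moves described above, the sketch below implements a Metropolis-Hastings step that uses a normalizing flow as an independence proposal, alongside an ordinary local random-walk update. The `flow` object with `sample()` and `log_prob()` methods is a placeholder, not the authors' code; any trained flow exposing a density and a sampler would do.

```python
import numpy as np

def flow_mh_step(x, log_target, flow, rng):
    """Nonlocal Metropolis-Hastings update with a flow as independence proposal.

    Accepts with probability min(1, pi(x') q(x) / (pi(x) q(x'))), where q is the
    flow's density. `flow.sample(rng)` and `flow.log_prob(x)` are assumed methods.
    """
    x_prop = flow.sample(rng)
    log_alpha = (log_target(x_prop) + flow.log_prob(x)
                 - log_target(x) - flow.log_prob(x_prop))
    if np.log(rng.uniform()) < log_alpha:
        return x_prop, True
    return x, False

def local_rw_step(x, log_target, step, rng):
    """Standard local random-walk Metropolis update, for mixing with the above."""
    x_prop = x + step * rng.standard_normal(x.shape)
    if np.log(rng.uniform()) < log_target(x_prop) - log_target(x):
        return x_prop, True
    return x, False
```

In the adaptive setting the abstract describes, samples produced by such chains are in turn used to keep training the flow toward the target, which is why the initialization of that training matters.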

arXiv e-prints
May 26, 2021

cuFINUFFT: a load-balanced GPU library for general-purpose nonuniform FFTs

Yu-hsuan Shih, Garrett Wright, Joakim Andén, Johannes Blaschke, A. Barnett

Nonuniform fast Fourier transforms dominate the computational cost in many applications including image reconstruction and signal processing. We thus present a general-purpose GPU-based CUDA library for type 1 (nonuniform to uniform) and type 2 (uniform to nonuniform) transforms in dimensions 2 and 3, in single or double precision. It achieves high performance for a given user-requested accuracy, regardless of the distribution of nonuniform points, via cache-aware point reordering, and load-balanced blocked spreading in shared memory. At low accuracies, this gives on-GPU throughputs around $10^9$ nonuniform points per second, and (even including host-device transfer) is typically 4-10$\times$ faster than the latest parallel CPU code FINUFFT (at 28 threads). It is competitive with two established GPU codes, being up to 90$\times$ faster at high accuracy and/or type 1 clustered point distributions. Finally, we demonstrate a 6-18$\times$ speedup versus CPU in an X-ray diffraction 3D iterative reconstruction task at $10^{-12}$ accuracy, observing excellent multi-GPU weak scaling up to one rank per GPU. Paper talk: https://www.youtube.com/watch?v=PnW6ehMyHxM.
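For orientation, the type 1 transform referred to above is the exponential sum written out in the naive NumPy reference below (the sign and mode-ordering conventions here are illustrative; the FINUFFT/cuFINUFFT documentation fixes the exact ones). The library evaluates the same sums to a user-requested tolerance in roughly O(M + N log N) work on the GPU; this direct version is only useful for checking small cases.

```python
import numpy as np

def nufft2d1_direct(x, y, c, N1, N2, isign=+1):
    """Naive O(M*N1*N2) type 1 (nonuniform -> uniform) NUFFT in 2D:
        f[k1, k2] = sum_j c[j] * exp(isign * 1j * (k1*x[j] + k2*y[j])),
    for points (x, y) in [-pi, pi)^2 and integer modes k1, k2 on centered grids."""
    k1 = np.arange(-(N1 // 2), (N1 + 1) // 2)
    k2 = np.arange(-(N2 // 2), (N2 + 1) // 2)
    e1 = np.exp(isign * 1j * np.outer(k1, x))            # (N1, M)
    e2 = np.exp(isign * 1j * np.outer(k2, y))            # (N2, M)
    return np.einsum('aj,bj,j->ab', e1, e2, np.asarray(c, dtype=complex))

# Tiny made-up example.
rng = np.random.default_rng(0)
M = 1000
x, y = rng.uniform(-np.pi, np.pi, (2, M))
c = rng.standard_normal(M) + 1j * rng.standard_normal(M)
print(nufft2d1_direct(x, y, c, 32, 32).shape)            # (32, 32)
```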


Recovering missing data in coherent diffraction imaging

D. Barmherzig, A. Barnett, C. Epstein, L. Greengard, J. Magland, M. Rachh

In coherent diffraction imaging (CDI) experiments, the intensity of the wave scattered by an object is measured on an array of detectors. This signal can be interpreted as the square of the modulus of the Fourier transform of the unknown scattering density. A beam-stop obstructs the forward-scattered wave, and hence the modulus Fourier data cannot be measured in a neighborhood of $k=0$. In this note, we describe a linear method for recovering this unmeasured modulus Fourier data from the measured values and an estimate of the support of the image's autocorrelation function, without consideration of phase retrieval. We analyze the conditioning of this problem, which grows exponentially with the modulus of the maximum spatial frequency not measured, and the effects of noise.
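A 1D toy version of the linear recovery, with made-up sizes and a made-up beam-stop radius (a sketch of the idea, not the authors' method in detail): since the autocorrelation is supported on a known set of indices, the inverse DFT of the completed intensity data must vanish outside that set, and each such index supplies one linear equation for the unmeasured low-frequency values.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 16, 64                       # object support length; padded FFT size (N >= 2m - 1)
rho = np.zeros(N)
rho[:m] = rng.random(m)             # toy nonnegative scattering density
I = np.abs(np.fft.fft(rho))**2      # ideal intensity data |F(rho)|^2

idx = np.arange(N)
missing = np.where(np.minimum(idx, N - idx) <= 2)[0]     # beam stop hides |k| <= 2
measured = np.setdiff1d(idx, missing)

# ifft(I) is the autocorrelation, supported on {-(m-1), ..., m-1} (mod N);
# it must therefore vanish at indices m, ..., N - m.
t = np.arange(m, N - m + 1)
rows = np.exp(2j * np.pi * np.outer(t, idx) / N) / N     # inverse-DFT rows
A = rows[:, missing]
rhs = -rows[:, measured] @ I[measured]

u = np.linalg.lstsq(A, rhs, rcond=None)[0].real          # recovered beam-stop values
print("max recovery error:", np.max(np.abs(u - I[missing])))
```

The conditioning statement in the abstract is visible here: widening the beam stop makes the columns of `A` increasingly close to dependent, so noise in the measured data is amplified accordingly.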


AI-assisted superresolution cosmological simulations

Y. Li, Yueying Ni, Rupert A. C. Croft, Tiziana Di Matteo, Simeon Bird, Yu Feng

Cosmological simulations of galaxy formation are limited by finite computational resources. We draw from the ongoing rapid advances in Artificial Intelligence (specifically Deep Learning) to address this problem. Neural networks have been developed to learn from high-resolution (HR) image data, and then make accurate super-resolution (SR) versions of different low-resolution (LR) images. We apply such techniques to LR cosmological N-body simulations, generating SR versions. Specifically, we are able to enhance the simulation resolution by generating 512 times more particles and predicting their displacements from the initial positions. Therefore our results can be viewed as new simulation realizations themselves rather than projections, e.g., to their density fields. Furthermore, the generation process is stochastic, enabling us to sample the small-scale modes conditioned on the large-scale environment. Our model learns from only 16 pairs of small-volume LR-HR simulations, and is then able to generate SR simulations that successfully reproduce the HR matter power spectrum to percent level up to $16\,h\,\mathrm{Mpc}^{-1}$, and the HR halo mass function to within 10% down to $10^{11}\,M_\odot$. We successfully deploy the model in a box 1000 times larger than the training simulation box, showing that high-resolution mock surveys can be generated rapidly. We conclude that AI assistance has the potential to revolutionize modeling of small-scale galaxy formation physics in large cosmological volumes.
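A minimal PyTorch sketch of the kind of stochastic upsampler the abstract describes; the channel counts, layer choices, and plain transposed convolutions are assumptions for illustration, not the authors' architecture. It maps a low-resolution displacement field plus a noise field to a field sampled 8 times more finely per dimension, i.e. 512 times more output particles; nothing is trained here, the point is only the conditioning-plus-noise interface and the shape bookkeeping.

```python
import torch
import torch.nn as nn

class ToySRGenerator(nn.Module):
    """Illustrative conditional generator: LR displacement field (3 channels)
    plus a Gaussian noise channel -> HR displacement field on a grid refined
    8x per dimension (512x more particles). A sketch, not the paper's model."""

    def __init__(self, width=64):
        super().__init__()
        layers, in_ch = [], 3 + 1                # 3 displacement components + 1 noise channel
        for _ in range(3):                       # three 2x upsamplings -> 8x per dimension
            layers += [nn.ConvTranspose3d(in_ch, width, kernel_size=4, stride=2, padding=1),
                       nn.LeakyReLU(0.2)]
            in_ch = width
        layers.append(nn.Conv3d(width, 3, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, lr_disp, noise):
        # Conditioning on the large-scale field; stochasticity enters via the noise.
        return self.net(torch.cat([lr_disp, noise], dim=1))

lr = torch.randn(1, 3, 8, 8, 8)                  # toy low-resolution displacement field
z = torch.randn(1, 1, 8, 8, 8)                   # noise field for stochastic sampling
print(ToySRGenerator()(lr, z).shape)             # torch.Size([1, 3, 64, 64, 64])
```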


Low rank compression in the numerical solution of the nonequilibrium Dyson equation

J. Kaye, Denis Golež

We propose a method to improve the computational and memory efficiency of numerical solvers for the nonequilibrium Dyson equation in the Keldysh formalism. It is based on the empirical observation that the nonequilibrium Green's functions and self energies arising in many problems of physical interest, discretized as matrices, have low rank off-diagonal blocks, and can therefore be compressed using a hierarchical low rank data structure. We describe an efficient algorithm to build this compressed representation on the fly during the course of time stepping, and use the representation to reduce the cost of computing history integrals, which is the main computational bottleneck. For systems with the hierarchical low rank property, our method reduces the computational complexity of solving the nonequilibrium Dyson equation from cubic to near quadratic, and the memory complexity from quadratic to near linear. We demonstrate the full solver for the Falicov-Kimball model exposed to a rapid ramp and Floquet driving of system parameters, and are able to increase feasible propagation times substantially. We present examples with 262144 time steps, which would require approximately five months of computing time and 2.2 TB of memory using the direct time stepping method, but can be completed in just over a day on a laptop with less than 4 GB of memory using our method. We also confirm the hierarchical low rank property for the driven Hubbard model in the weak coupling regime within the GW approximation, and in the strong coupling regime within dynamical mean-field theory.
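The compressibility the method relies on is easy to see on a toy two-time kernel (the function below is a cartoon chosen only to illustrate the point, not a physical Green's function): an off-diagonal block, which is what the history integrals touch, has small numerical rank and can be stored in factored form.

```python
import numpy as np

n = 1024
t = np.linspace(0.0, 20.0, n)
dt = t[:, None] - t[None, :]
G = np.exp(-0.3 * np.abs(dt)) * np.cos(2.0 * dt)   # smooth, decaying two-time cartoon

block = G[n // 2:, : n // 2]                       # off-diagonal (history) block
s = np.linalg.svd(block, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))               # numerical rank at tolerance 1e-10
print("block shape:", block.shape, " numerical rank:", rank)
# Storing the block as a product of two thin factors costs 2*(n/2)*rank entries
# instead of (n/2)^2, which is the source of the reduced memory footprint.
print("compression factor:", (n // 2) ** 2 / (2 * (n // 2) * rank))
```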


Fast multipole methods for evaluation of layer potentials with locally-corrected quadratures

L. Greengard, Michael O'Neil, M. Rachh, Felipe Vico

While fast multipole methods (FMMs) are in widespread use for the rapid evaluation of potential fields governed by the Laplace, Helmholtz, Maxwell or Stokes equations, their coupling to high-order quadratures for evaluating layer potentials is still an area of active research. In three dimensions, a number of issues need to be addressed, including the specification of the surface as the union of high-order patches, the incorporation of accurate quadrature rules for integrating singular or weakly singular Green's functions on such patches, and their coupling to the oct-tree data structures on which the FMM separates near and far field interactions. Although the latter is straightforward for point distributions, the near field for a patch is determined by its physical dimensions, not the distribution of discretization points on the surface. Here, we present a general framework for efficiently coupling locally corrected quadratures with FMMs, relying primarily on what are called generalized Gaussian quadrature rules, supplemented by adaptive integration. The approach, however, is quite general and easily applicable to other schemes, such as Quadrature by Expansion (QBX). We also introduce a number of accelerations to reduce the cost of quadrature generation itself, and present several numerical examples of acoustic scattering that demonstrate the accuracy, robustness, and computational efficiency of the scheme. On a single core of an Intel i5 2.3GHz processor, a Fortran implementation of the scheme can generate near field quadrature corrections for between 1000 and 10,000 points per second, depending on the order of accuracy and the desired precision. A Fortran implementation of the algorithm described in this work is available at https://gitlab.com/fastalgorithms/fmm3dbie.
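A small 1D cartoon of why the near field needs special treatment (the panel, density, and target distances are invented; adaptive integration stands in here for the locally corrected rules of the paper): a fixed smooth Gauss-Legendre rule on a source panel is accurate for distant targets but loses digits as the target approaches the panel and the log kernel becomes nearly singular.

```python
import numpy as np
from scipy.integrate import quad

# Layer-potential-style integral over a flat panel y(t) = (t, 0), t in [0, 1],
# with smooth density sigma, evaluated at a target (0.5, d) at distance d.
sigma = lambda t: 1.0 + t * (1.0 - t)
integrand = lambda t, d: np.log(np.hypot(t - 0.5, d)) * sigma(t)

nodes, weights = np.polynomial.legendre.leggauss(16)   # fixed 16-point smooth rule
tq, wq = 0.5 * (nodes + 1.0), 0.5 * weights            # mapped to [0, 1]

for d in [1.0, 0.1, 0.01, 0.001]:
    fixed = np.sum(wq * integrand(tq, d))
    ref, _ = quad(integrand, 0.0, 1.0, args=(d,), limit=200)   # adaptive reference
    print(f"target distance {d:7.3f}   fixed-rule error {abs(fixed - ref):.1e}")
```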


Learning physically consistent differential equation models from data using group sparsity

Suryanarayana Maddu, Bevan L Cheeseman, Ivo F Sbalzarini, C. Müller

We propose a statistical learning framework based on group-sparse regression that can be used to 1) enforce conservation laws, 2) ensure model equivalence, and 3) guarantee symmetries when learning or inferring differential-equation models from measurement data. Directly learning interpretable mathematical models from data has emerged as a valuable modeling approach. However, in areas like biology, high noise levels, sensor-induced correlations, and strong inter-system variability can render data-driven models nonsensical or physically inconsistent without additional constraints on the model structure. Hence, it is important to leverage prior knowledge from physical principles to learn "biologically plausible and physically consistent" models rather than models that simply fit the data best. We present a novel group Iterative Hard Thresholding (gIHT) algorithm and use stability selection to infer physically consistent models with minimal parameter tuning. We show several applications from systems biology that demonstrate the benefits of enforcing priors in data-driven modeling.
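A compact NumPy sketch of the group hard-thresholding step named above (synthetic data, group sizes, and iteration count are arbitrary; this is not the authors' gIHT implementation): after each least-squares gradient step, only the s groups of coefficients with the largest norms survive, so candidate terms enter or leave the learned equation as whole groups.

```python
import numpy as np

def group_iht(X, y, groups, s, iters=500):
    """Group Iterative Hard Thresholding (sketch): gradient steps on the
    least-squares loss, then keep only the s groups with largest norm."""
    n, p = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2            # 1 / Lipschitz constant
    w = np.zeros(p)
    for _ in range(iters):
        w = w + step * X.T @ (y - X @ w)              # gradient step on 0.5*||y - Xw||^2
        norms = np.array([np.linalg.norm(w[g]) for g in groups])
        keep = set(np.argsort(norms)[-s:])
        for i, g in enumerate(groups):                # hard-threshold whole groups
            if i not in keep:
                w[g] = 0.0
    return w

# Toy example: 6 groups of 3 candidate terms, 2 groups truly active.
rng = np.random.default_rng(1)
groups = [list(range(3 * i, 3 * i + 3)) for i in range(6)]
X = rng.standard_normal((200, 18))
w_true = np.zeros(18)
w_true[0:3], w_true[9:12] = [1.0, -2.0, 0.5], [0.0, 1.5, -1.0]
y = X @ w_true + 0.01 * rng.standard_normal(200)
w_hat = group_iht(X, y, groups, s=2)
print("active groups:", [i for i, g in enumerate(groups) if np.any(w_hat[g] != 0)])
```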


deepregression: a Flexible Neural Network Framework for Semi-Structured Deep Distributional Regression

David Rügamer, Ruolin Shen, Christina Bukas, Dominik Thalmeier, Nadja Klein, Chris Kolb, Florian Pfisterer, Philipp Kopper, Bernd Bischl, others, C. Müller

This paper describes the implementation of semi-structured deep distributional regression, a flexible framework to learn distributions based on a combination of additive regression models and deep neural networks. deepregression is implemented in both R and Python, using the deep learning libraries TensorFlow and PyTorch, respectively. The implementation consists of (1) a modular neural network building system for the combination of various statistical and deep learning approaches, (2) an orthogonalization cell to allow for an interpretable combination of different subnetworks, as well as (3) pre-processing steps necessary to initialize such models. The software allows users to define models in a user-friendly manner, with distributions specified via a formula environment inspired by classical statistical model frameworks such as mgcv. The package's modular design and functionality provide a unique resource for rapid and reproducible prototyping of complex statistical and deep learning models while simultaneously retaining the indispensable interpretability of classical statistical models.
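A post-hoc NumPy sketch of the projection underlying the orthogonalization cell mentioned in point (2) (names and sizes are illustrative, not the package's API): the deep features are projected onto the orthogonal complement of the structured design matrix's column space, so the deep part cannot absorb effects the interpretable additive part already represents.

```python
import numpy as np

def orthogonalize(U, X):
    """Project deep features U onto the orthogonal complement of col(X).

    X : (n, q) structured design matrix (e.g. a spline basis of an additive term)
    U : (n, m) features produced by the deep subnetwork for the same n observations
    Afterwards X.T @ U_tilde = 0, so the structured coefficients keep their
    usual interpretation."""
    Q, _ = np.linalg.qr(X)                 # orthonormal basis for col(X)
    return U - Q @ (Q.T @ U)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
U = rng.standard_normal((100, 8))
U_tilde = orthogonalize(U, X)
print(np.max(np.abs(X.T @ U_tilde)))       # ~1e-14: deep features orthogonal to X
```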

April 6, 2021