150 Publications

A causal view on compositional data

Elisabeth Ailer, Niki Kilbertus, C. Müller

Many scientific datasets are compositional in nature. Important examples include species abundances in ecology, rock compositions in geology, topic compositions in large-scale text corpora, and sequencing count data in molecular biology. Here, we provide a causal view on compositional data in an instrumental variable setting where the composition acts as the cause. Throughout, we pay particular attention to the interpretation of compositional causes from the viewpoint of interventions and crisply articulate potential pitfalls for practitioners. Focusing on modern high-dimensional microbiome sequencing data as a timely illustrative use case, our analysis first reveals that popular one-dimensional information-theoretic summary statistics, such as diversity and richness, may be insufficient for drawing causal conclusions from ecological data. Instead, we advocate for multivariate alternatives using statistical data transformations and regression techniques that take the special structure of the compositional sample space into account. In a comparative analysis on synthetic and semi-synthetic data we show the advantages and limitations of our proposal. We posit that our framework may provide a useful starting point for cause-effect estimation in the context of compositional data.
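
As a rough illustration of the kind of pipeline advocated above (a log-ratio transformation of the composition followed by instrumental-variable regression), here is a minimal sketch in Python. The synthetic data-generating process, the centered log-ratio (CLR) transform, and plain two-stage least squares are illustrative assumptions, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5000, 5

# Hypothetical data-generating process: instruments Z shift the latent log-abundances,
# while a hidden confounder u perturbs both the composition and the outcome.
Z = rng.normal(size=(n, p))
u = rng.normal(size=(n, 1))
Gamma = rng.normal(size=(p, p))
gamma_u = rng.normal(size=(1, p))
log_abund = Z @ Gamma + u @ gamma_u + 0.5 * rng.normal(size=(n, p))
comp = np.exp(log_abund)
X = comp / comp.sum(axis=1, keepdims=True)                # compositional cause (rows sum to 1)

clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)   # centered log-ratio transform
beta_true = np.array([1.0, -0.5, 0.0, 0.5, -1.0])         # zero-sum effect on the CLR scale
y = clr @ beta_true + 2.0 * u[:, 0] + 0.1 * rng.normal(size=n)

# Naive OLS on the CLR coordinates is biased by the confounder.
D_obs = np.column_stack([np.ones(n), clr])
beta_ols = np.linalg.lstsq(D_obs, y, rcond=None)[0][1:]

# Two-stage least squares: project CLR(X) on the instruments, then regress y on the projection.
D_z = np.column_stack([np.ones(n), Z])
clr_hat = D_z @ np.linalg.lstsq(D_z, clr, rcond=None)[0]  # first stage
beta_iv = np.linalg.lstsq(np.column_stack([np.ones(n), clr_hat]), y, rcond=None)[0][1:]

print("true  :", beta_true)
print("OLS   :", np.round(beta_ols, 2))
print("2SLS  :", np.round(beta_iv, 2))
```

With a valid instrument the 2SLS estimate should land near the true zero-sum coefficients, whereas the naive OLS estimate is pulled away by the unobserved confounder.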

June 21, 2021

Rank-normalization, folding, and localization: An improved \(\widehat{R}\) for assessing convergence of MCMC

Aki Vehtari, Andrew Gelman, Daniel Simpson, B. Carpenter, Paul-Christian Bürkner

Markov chain Monte Carlo is a key computational tool in Bayesian statistics, but it can be challenging to monitor the convergence of an iterative stochastic algorithm. In this paper we show that the convergence diagnostic \(\widehat{R}\) of Gelman and Rubin (1992) has serious flaws. The traditional \(\widehat{R}\) will fail to correctly diagnose convergence failures when the chain has a heavy tail or when the variance varies across the chains. We propose an alternative rank-based diagnostic that fixes these problems. We also introduce a collection of quantile-based local efficiency measures, along with a practical approach for computing Monte Carlo error estimates for quantiles. We suggest that common trace plots should be replaced with rank plots from multiple chains. Finally, we give recommendations for how these methods should be used in practice.
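
A minimal sketch of the core idea (pool the draws, replace them by normal scores of their ranks, then apply split-\(\widehat{R}\)); it omits the folding step and the localized quantile diagnostics, and the heavy-tailed example is an assumption for illustration.

```python
import numpy as np
from scipy import stats

def split_rhat(draws):
    """Classic split-R-hat for draws of shape (n_chains, n_iterations)."""
    half = draws.shape[1] // 2
    chains = np.concatenate([draws[:, :half], draws[:, half:2 * half]], axis=0)
    within = chains.var(axis=1, ddof=1).mean()
    between = half * chains.mean(axis=1).var(ddof=1)
    var_hat = (half - 1) / half * within + between / half
    return np.sqrt(var_hat / within)

def rank_normalized_rhat(draws):
    """Rank-normalize the pooled draws (fractional ranks -> normal scores), then split-R-hat."""
    ranks = stats.rankdata(draws, method="average").reshape(draws.shape)
    z = stats.norm.ppf((ranks - 0.375) / (draws.size + 0.25))   # Blom-type offset
    return split_rhat(z)

# Heavy-tailed draws, where moment-based diagnostics are unreliable:
rng = np.random.default_rng(1)
draws = stats.cauchy.rvs(size=(4, 1000), random_state=rng)
print("rank-normalized split-R-hat:", round(rank_normalized_rhat(draws), 3))
```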

Adaptive Monte Carlo augmented with normalizing flows

M. Gabrié, Grant M. Rotskoff, E. Vanden-Eijnden

Many problems in the physical sciences, machine learning, and statistical inference necessitate sampling from a high-dimensional, multi-modal probability distribution. Markov Chain Monte Carlo (MCMC) algorithms, the ubiquitous tool for this task, typically rely on random, reversible, and local updates to propagate configurations of a given system in a way that ensures that generated configurations will be distributed according to a target probability distribution asymptotically. In high-dimensional settings with multiple relevant metastable basins, local approaches require either immense computational effort or intricately designed importance sampling strategies to capture information about, for example, the relative populations of such basins. Here we analyze a framework for augmenting MCMC sampling with nonlocal transition kernels parameterized with generative models known as normalizing flows. We focus on a setting where there is no preexisting data, as is commonly the case for problems in which MCMC is used. Our results emphasize that the implementation of the normalizing flow must be adapted to the structure of the target distribution in order to preserve the statistics of the target at all scales. Furthermore, we analyze the propensity of our algorithm to discover new states and demonstrate the importance of initializing the training with some a priori knowledge of the relevant modes.
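
A minimal sketch of the generic structure: local random-walk Metropolis updates interleaved with nonlocal independence proposals drawn from a generative model, accepted with the usual Metropolis-Hastings correction. The bimodal target and the Gaussian stand-in for the (adaptively trained) normalizing flow are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    """Unnormalized bimodal 1D target: mixture of two well-separated Gaussians."""
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

class GaussianProposal:
    """Stand-in for a trained flow: exposes sample() and log_prob() (up to a constant)."""
    def __init__(self, mu, sigma):
        self.mu, self.sigma = mu, sigma
    def sample(self):
        return self.mu + self.sigma * rng.normal()
    def log_prob(self, x):
        return -0.5 * ((x - self.mu) / self.sigma) ** 2 - np.log(self.sigma)

flow = GaussianProposal(mu=0.0, sigma=5.0)      # crude global model covering both modes
x, chain = 4.0, []
for t in range(20000):
    if t % 10 == 0:                             # nonlocal move: independence Metropolis-Hastings
        y = flow.sample()
        log_alpha = (log_target(y) - log_target(x)) + (flow.log_prob(x) - flow.log_prob(y))
    else:                                       # local move: random-walk Metropolis
        y = x + 0.5 * rng.normal()
        log_alpha = log_target(y) - log_target(x)
    if np.log(rng.uniform()) < log_alpha:
        x = y
    chain.append(x)

chain = np.array(chain)
print("fraction of samples in the right-hand mode:", np.mean(chain > 0).round(2))
```

With only the local kernel the chain stays trapped in one basin; the nonlocal proposals let it hop between modes so the two populations equilibrate.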

arXiv e-prints
May 26, 2021

Towards Low-Photon Nanoscale Imaging: Holographic Phase Retrieval via Maximum Likelihood Optimization

A new algorithmic framework is presented for holographic phase retrieval via maximum likelihood optimization, which allows for practical and robust image reconstruction. This framework is especially well-suited for holographic coherent diffraction imaging in the low-photon regime, where data is highly corrupted by Poisson shot noise. Thus, this methodology provides a viable solution towards the advent of low-photon nanoscale imaging, which is a fundamental challenge facing the current state of imaging technologies. Practical optimization algorithms are derived and implemented, and extensive numerical simulations demonstrate significantly improved image reconstruction versus the leading algorithms currently in use. Further experiments compare the performance of popular holographic reference geometries to determine the optimal combined physical setup and algorithm pipeline for practical implementation. Additional features of these methods are also demonstrated, which allow for fewer experimental constraints.
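
A minimal sketch of Poisson maximum-likelihood reconstruction for a toy holographic setup: a known reference block beside an unknown nonnegative object, data given by Poisson counts of the Fourier intensity, and a generic bound-constrained quasi-Newton solver. The geometry, photon scale, and optimizer are assumptions for illustration, not the algorithms derived in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 64
npix = n * n

# Hypothetical holographic geometry: unknown nonnegative object x_true next to a known
# reference block; the detector records Poisson counts of the Fourier intensity.
x_true = np.zeros((n, n)); x_true[20:40, 10:25] = rng.uniform(0.5, 1.0, size=(20, 15))
ref = np.zeros((n, n));    ref[28:32, 44:48] = 4.0
flux = 10.0                                            # photon budget scale (illustrative)
c = flux / npix

def intensity(x):
    return c * np.abs(np.fft.fft2(x + ref)) ** 2 + 1e-9

y = rng.poisson(intensity(x_true))                     # shot-noise-corrupted data

def nll_and_grad(xflat):
    """Poisson negative log-likelihood sum(v - y*log v) and its exact gradient in x."""
    x = xflat.reshape(n, n)
    w = np.fft.fft2(x + ref)
    v = c * np.abs(w) ** 2 + 1e-9
    a = c * (1.0 - y / v)
    grad = 2.0 * npix * np.real(np.fft.ifft2(a * w))   # = 2 Re[F^H(a .* w)]
    return np.sum(v - y * np.log(v)), grad.ravel()

res = minimize(nll_and_grad, np.zeros(npix), jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * npix, options={"maxiter": 300})
x_hat = res.x.reshape(n, n)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```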

May 24, 2021

cuFINUFFT: a load-balanced GPU library for general-purpose nonuniform FFTs

Yu-hsuan Shih, Garrett Wright, Joakim Andén, Johannes Blaschke, A. Barnett

Nonuniform fast Fourier transforms dominate the computational cost in many applications including image reconstruction and signal processing. We thus present a general-purpose GPU-based CUDA library for type 1 (nonuniform to uniform) and type 2 (uniform to nonuniform) transforms in dimensions 2 and 3, in single or double precision. It achieves high performance for a given user-requested accuracy, regardless of the distribution of nonuniform points, via cache-aware point reordering and load-balanced blocked spreading in shared memory. At low accuracies, this gives on-GPU throughputs around $10^9$ nonuniform points per second, and (even including host-device transfer) is typically 4-10$\times$ faster than the latest parallel CPU code FINUFFT (at 28 threads). It is competitive with two established GPU codes, being up to 90$\times$ faster at high accuracy and/or type 1 clustered point distributions. Finally, we demonstrate a 6-18$\times$ speedup versus CPU in an X-ray diffraction 3D iterative reconstruction task at $10^{-12}$ accuracy, observing excellent multi-GPU weak scaling up to one rank per GPU. A talk about this paper is available at https://www.youtube-nocookie.com/watch?v=PnW6ehMyHxM.
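
For orientation, the type-1 transform computed by the library is $f_k = \sum_j c_j e^{i k \cdot x_j}$ over a grid of integer modes $k$. The brute-force evaluation below is a handy correctness reference; the commented library calls are approximate and should be checked against the FINUFFT/cuFINUFFT documentation.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N1, N2 = 2000, 32, 32                           # nonuniform points, uniform modes per dimension

x = rng.uniform(-np.pi, np.pi, M)                  # nonuniform point coordinates
y = rng.uniform(-np.pi, np.pi, M)
c = rng.normal(size=M) + 1j * rng.normal(size=M)   # source strengths

# Direct O(M * N1 * N2) evaluation of the type-1 sum f_k = sum_j c_j exp(i k . x_j).
k1 = np.arange(-(N1 // 2), (N1 + 1) // 2)
k2 = np.arange(-(N2 // 2), (N2 + 1) // 2)
E1 = np.exp(1j * np.outer(k1, x))                  # (N1, M)
E2 = np.exp(1j * np.outer(k2, y))                  # (N2, M)
f_direct = np.einsum("km,lm,m->kl", E1, E2, c)     # (N1, N2) array of Fourier modes

# Sanity check: the k = (0, 0) mode is simply the sum of the strengths.
print(np.allclose(f_direct[N1 // 2, N2 // 2], c.sum()))

# The fast library call replacing the double sum would look roughly like (check the docs):
#   f = finufft.nufft2d1(x, y, c, (N1, N2), eps=1e-9)      # CPU FINUFFT bindings
#   f = cufinufft.nufft2d1(x_gpu, y_gpu, c_gpu, (N1, N2))  # cuFINUFFT bindings
```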

AI-assisted super-resolution cosmological simulations

Y. Li, Yueying Ni, Rupert A. C. Croft, Tiziana Di Matteo, Simeon Bird, Yu Feng

Cosmological simulations of galaxy formation are limited by finite computational resources. We draw from the ongoing rapid advances in Artificial Intelligence (specifically Deep Learning) to address this problem. Neural networks have been developed to learn from high-resolution (HR) image data, and then make accurate super-resolution (SR) versions of different low-resolution (LR) images. We apply such techniques to LR cosmological N-body simulations, generating SR versions. Specifically, we are able to enhance the simulation resolution by generating 512 times more particles and predicting their displacements from the initial positions. Therefore our results can be viewed as new simulation realizations themselves rather than projections, e.g., to their density fields. Furthermore, the generation process is stochastic, enabling us to sample the small-scale modes conditioned on the large-scale environment. Our model learns from only 16 pairs of LR-HR simulations, and is then able to generate SR simulations that successfully reproduce the matter power spectrum and the halo mass function of the HR targets. We successfully deploy the model in a box 1000 times larger than the training simulation box, showing that high-resolution mock surveys can be generated rapidly. We conclude that AI assistance has the potential to revolutionize modeling of small-scale galaxy formation physics in large cosmological volumes.
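
A minimal PyTorch sketch of the input/output geometry only: a low-resolution displacement field over N^3 particles is mapped to one over (8N)^3 = 512 N^3 particles. The single transposed convolution (plus injected noise for stochasticity) is a made-up stand-in for the paper's generative network, not its architecture.

```python
import torch
import torch.nn as nn

class ToySuperResolver(nn.Module):
    """Upsample a 3D displacement field by 8x per dimension (512x more particles).

    Stand-in for the paper's generative model: one transposed convolution plus a
    local refinement layer, mapping (B, 3, N, N, N) -> (B, 3, 8N, 8N, 8N).
    """
    def __init__(self, channels=16):
        super().__init__()
        self.up = nn.ConvTranspose3d(3, channels, kernel_size=8, stride=8)
        self.refine = nn.Conv3d(channels, 3, kernel_size=3, padding=1)

    def forward(self, lr_displacement, noise_scale=0.1):
        h = torch.relu(self.up(lr_displacement))
        h = h + noise_scale * torch.randn_like(h)   # stochastic small-scale modes
        return self.refine(h)

lr = torch.randn(1, 3, 8, 8, 8)                     # 8^3 LR particles, 3 displacement components
sr = ToySuperResolver()(lr)
print(sr.shape)                                     # torch.Size([1, 3, 64, 64, 64])
```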

Recovering missing data in coherent diffraction imaging

In coherent diffraction imaging (CDI) experiments, the intensity of the wave scattered by an object is measured on an array of detectors. This signal can be interpreted as the square of the modulus of the Fourier transform of the unknown scattering density. A beam-stop obstructs the forward scattered wave and, hence, the modulus Fourier data in a neighborhood of k=0 cannot be measured. In this note, we describe a linear method for recovering this unmeasured modulus Fourier data from the measured values and an estimate of the support of the image's autocorrelation function, without consideration of phase retrieval. We analyze the conditioning of this problem, which grows exponentially with the largest unmeasured spatial frequency, and the effects of noise.
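
A minimal numpy sketch of the idea: treat the unmeasured modulus-squared values behind the beam-stop as unknowns and choose them so that the inverse Fourier transform of the completed data (the autocorrelation) vanishes outside its estimated support, which is a linear least-squares problem. The grid size, beam-stop shape, and noiseless data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64

# Object with compact support; its autocorrelation support is then known (differences of the support).
x = np.zeros((n, n)); x[24:40, 24:40] = rng.uniform(size=(16, 16))
I = np.abs(np.fft.fft2(x)) ** 2                      # modulus-squared Fourier data

k = np.minimum(np.arange(n), n - np.arange(n))       # |frequency index|, accounting for wrap-around
stop = (k[:, None] <= 2) & (k[None, :] <= 2)         # beam-stop: frequencies near k = 0 unmeasured
S = (k[:, None] <= 15) & (k[None, :] <= 15)          # estimated autocorrelation support
outside = ~S                                         # the autocorrelation must vanish here

# Linear system: pick the missing values so that IFFT(completed data) = 0 outside S.
I_meas = np.where(stop, 0.0, I)
cols = []
for (i, j) in np.argwhere(stop):
    e = np.zeros((n, n)); e[i, j] = 1.0
    cols.append(np.fft.ifft2(e)[outside])
A = np.array(cols).T                                 # (#outside pixels) x (#missing values)
b = -np.fft.ifft2(I_meas)[outside]
A_r = np.vstack([A.real, A.imag])
b_r = np.concatenate([b.real, b.imag])
u = np.linalg.lstsq(A_r, b_r, rcond=None)[0]         # recovered low-frequency modulus data

print("max error on recovered values (relative to peak):",
      np.max(np.abs(u - I[stop])) / I[stop].max())
```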

Low rank compression in the numerical solution of the nonequilibrium Dyson equation

We propose a method to improve the computational and memory efficiency of numerical solvers for the nonequilibrium Dyson equation in the Keldysh formalism. It is based on the empirical observation that the nonequilibrium Green's functions and self energies arising in many problems of physical interest, discretized as matrices, have low rank off-diagonal blocks, and can therefore be compressed using a hierarchical low rank data structure. We describe an efficient algorithm to build this compressed representation on the fly during the course of time stepping, and use the representation to reduce the cost of computing history integrals, which is the main computational bottleneck. For systems with the hierarchical low rank property, our method reduces the computational complexity of solving the nonequilibrium Dyson equation from cubic to near quadratic, and the memory complexity from quadratic to near linear. We demonstrate the full solver for the Falicov-Kimball model exposed to a rapid ramp and Floquet driving of system parameters, and are able to increase feasible propagation times substantially. We present examples with 262144 time steps, which would require approximately five months of computing time and 2.2 TB of memory using the direct time stepping method, but can be completed in just over a day on a laptop with less than 4 GB of memory using our method. We also confirm the hierarchical low rank property for the driven Hubbard model in the weak coupling regime within the GW approximation, and in the strong coupling regime within dynamical mean-field theory.
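
A small numpy illustration of the structural observation the method exploits: off-diagonal blocks of a smooth two-time kernel have low numerical rank, so storing truncated SVD factors is far cheaper than storing the dense block. The kernel below is a made-up stand-in, not an actual Dyson-equation solution.

```python
import numpy as np

# Smooth two-time kernel standing in for a nonequilibrium Green's function G(t, t').
N = 1024
t = np.linspace(0.0, 20.0, N)
T, Tp = np.meshgrid(t, t, indexing="ij")
G = (np.exp(-0.3 * np.abs(T - Tp)) * np.exp(-1j * 2.0 * (T - Tp))
     * (1.0 + 0.2 * np.cos(0.7 * T) * np.cos(0.3 * Tp)))

block = G[N // 2:, :N // 2]                       # an off-diagonal (history) block
s = np.linalg.svd(block, compute_uv=False)
tol = 1e-8
rank = int(np.sum(s > tol * s[0]))                # numerical rank at the given tolerance
dense = block.size
compressed = rank * (block.shape[0] + block.shape[1])
print(f"numerical rank at tol {tol:.0e}: {rank}  "
      f"(low-rank storage {compressed}/{dense} = {compressed / dense:.3f} of dense)")
```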

Fast multipole methods for evaluation of layer potentials with locally-corrected quadratures

L. Greengard, Michael O'Neil, M. Rachh, Felipe Vico

While fast multipole methods (FMMs) are in widespread use for the rapid evaluation of potential fields governed by the Laplace, Helmholtz, Maxwell or Stokes equations, their coupling to high-order quadratures for evaluating layer potentials is still an area of active research. In three dimensions, a number of issues need to be addressed, including the specification of the surface as the union of high-order patches, the incorporation of accurate quadrature rules for integrating singular or weakly singular Green's functions on such patches, and their coupling to the oct-tree data structures on which the FMM separates near and far field interactions. Although the latter is straightforward for point distributions, the near field for a patch is determined by its physical dimensions, not the distribution of discretization points on the surface. Here, we present a general framework for efficiently coupling locally corrected quadratures with FMMs, relying primarily on what are called generalized Gaussian quadrature rules, supplemented by adaptive integration. The approach, however, is quite general and easily applicable to other schemes, such as Quadrature by Expansion (QBX). We also introduce a number of accelerations to reduce the cost of quadrature generation itself, and present several numerical examples of acoustic scattering that demonstrate the accuracy, robustness, and computational efficiency of the scheme. On a single core of an Intel i5 2.3GHz processor, a Fortran implementation of the scheme can generate near field quadrature corrections for between 1000 and 10,000 points per second, depending on the order of accuracy and the desired precision. A Fortran implementation of the algorithm described in this work is available at https://gitlab.com/fastalgorithms/fmm3dbie.
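
A small illustration of why locally corrected quadrature is needed: for a 2D Laplace single-layer potential over a flat panel, a fixed smooth Gauss-Legendre rule is accurate for well-separated targets but degrades as the target approaches the panel, where an adaptive (or generalized Gaussian) rule must take over. The 2D kernel and the use of scipy's adaptive quadrature as the "local correction" are simplifications for illustration.

```python
import numpy as np
from scipy.integrate import quad

# 2D Laplace single-layer kernel -log(r)/(2*pi), integrated over a flat panel [-1, 1]
# with unit density, evaluated at a target a distance d above the panel's midpoint.
def kernel(x, d):
    return -np.log(np.hypot(x, d)) / (2.0 * np.pi)

# Fixed "smooth" rule, appropriate for well-separated (far-field) targets.
nodes, weights = np.polynomial.legendre.leggauss(16)

print(" distance   16-pt Gauss       adaptive (local correction)   |difference|")
for d in [2.0, 0.5, 0.1, 0.02]:
    far = np.sum(weights * kernel(nodes, d))
    near, _ = quad(kernel, -1.0, 1.0, args=(d,), epsabs=1e-12, limit=200)
    print(f"{d:9.2f}   {far: .12f}   {near: .12f}   {abs(far - near):.2e}")
```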
