428 Publications

Approximating the Gaussian as a Sum of Exponentials and Its Applications to the Fast Gauss Transform

We develop efficient and accurate sum-of-exponentials (SOE) approximations for the Gaussian using rational approximation of the exponential function on the negative real axis. Six-digit accuracy can be obtained with eight terms, and ten-digit accuracy with twelve. This representation is of potential interest in approximation theory, but we focus here on its use in accelerating the fast Gauss transform (FGT) in one and two dimensions. The one-dimensional scheme is particularly straightforward and easy to implement, requiring only twenty-four lines of MATLAB code. The two-dimensional version requires some care with data structures but is significantly more efficient than existing FGTs. Following a detailed presentation of the theoretical foundations, we demonstrate the performance of the fast transforms with several numerical experiments.
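The speedup an SOE representation buys is easiest to see in one dimension: for a single decaying exponential kernel on sorted points, the transform collapses to a forward and a backward recurrence, each O(N), so a p-term SOE yields an O(pN) Gauss transform. A minimal sketch of that sweep (illustrative only, not the paper's FGT; the point set, charges, and decay rate below are arbitrary):

```python
import numpy as np

def exp_transform_direct(x, q, t):
    # O(N^2) evaluation of u_i = sum_j q_j * exp(-t * |x_i - x_j|)
    return np.exp(-t * np.abs(x[:, None] - x[None, :])) @ q

def exp_transform_sweep(x, q, t):
    # O(N) evaluation via forward/backward recurrences; x must be sorted.
    n = len(x)
    S = np.empty(n)  # S_i = sum over j <= i of q_j * exp(-t * (x_i - x_j))
    T = np.empty(n)  # T_i = sum over j >= i of q_j * exp(-t * (x_j - x_i))
    S[0] = q[0]
    for i in range(1, n):
        S[i] = q[i] + np.exp(-t * (x[i] - x[i - 1])) * S[i - 1]
    T[-1] = q[-1]
    for i in range(n - 2, -1, -1):
        T[i] = q[i] + np.exp(-t * (x[i + 1] - x[i])) * T[i + 1]
    return S + T - q  # q subtracted because the i = j term appears twice

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 500))
q = rng.standard_normal(500)
u_fast = exp_transform_sweep(x, q, 2.5)
u_slow = exp_transform_direct(x, q, 2.5)
print(np.max(np.abs(u_fast - u_slow)))  # agrees to roundoff
```

The sweeps exploit the semigroup property exp(-t(a+b)) = exp(-ta) exp(-tb), which the Gaussian kernel itself lacks; the SOE supplies it term by term.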


Quantum Initial Conditions for Curved Inflating Universes

Mary I Letey, Zakhar Shumaylov, F. Agocs, Will J Handley, Michael P Hobson, Anthony N Lasenby

We discuss the challenges of motivating, constructing, and quantising a canonically normalised inflationary perturbation in spatially curved universes. We show that this has historically proved challenging due to the interaction of non-adiabaticity with spatial curvature. We propose a novel curvature perturbation which is canonically normalised and unique up to a single scalar parameter. This corrected quantisation has potential observational consequences via modifications to the primordial power spectrum at large angular scales, as well as theoretical implications for quantisation procedures in curved cosmologies filled with a scalar field.

November 30, 2022

Corner Cases of the Generalized Tau Method

Keaton J. Burns, D. Fortunato, Keith Julien, Geoffrey M. Vasil

Polynomial spectral methods provide fast, accurate, and flexible solvers for broad ranges of PDEs with one bounded dimension, where the incorporation of general boundary conditions is well understood. However, automating extensions to domains with multiple bounded dimensions is challenging because of difficulties in implementing boundary conditions and imposing compatibility conditions at shared edges and corners. Past work has included various workarounds, such as the anisotropic inclusion of partial boundary data at shared edges or approaches that only work for specific boundary conditions. Here we present a general system for imposing boundary and compatibility conditions for elliptic equations on hypercubes. We take an approach based on the generalized tau method, which allows for a wide range of boundary conditions for many types of spectral methods. The generalized tau method has the distinct advantage that the specified polynomial residual determines the exact algebraic solution; afterwards, any stable numerical scheme will find the same result. We can, therefore, provide one-to-one comparisons to traditional collocation and Galerkin methods within the tau framework. As an essential requirement, we add specific tau corrections to the boundary conditions in addition to the bulk PDE. We then impose additional mutual compatibility conditions to ensure boundary conditions match at shared subsurfaces. Our approach works with general boundary conditions that commute on intersecting subsurfaces, including Dirichlet, Neumann, Robin, and any combination of these on all boundaries. The tau corrections and compatibility conditions can be fully isotropic and easily incorporated into existing solvers. We present the method explicitly for the Poisson equation in two and three dimensions and describe its extension to arbitrary elliptic equations (e.g. biharmonic) in any dimension.
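The tau idea in one dimension: solve the PDE in coefficient space, then sacrifice the last few rows of the discretized operator to impose boundary conditions there instead. A minimal Chebyshev tau sketch for the 1D Poisson problem (illustrative only; the paper's contribution concerns the multidimensional case with corner and edge compatibility conditions):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Solve u'' = f on [-1, 1] with u(-1) = u(1) = 0 by a Chebyshev tau method.
N = 32

# Second-derivative operator acting on Chebyshev coefficients;
# its last two rows are identically zero (differentiation lowers degree by 2).
D2 = np.zeros((N + 1, N + 1))
for j in range(N + 1):
    e = np.zeros(N + 1); e[j] = 1.0
    d = C.chebder(e, 2)
    D2[: len(d), j] = d

f = lambda x: -np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi*x)
b = C.chebinterpolate(f, N)

# Tau correction: drop the last two (empty) PDE rows and impose the BCs there.
A = D2.copy()
A[N - 1, :] = 1.0                              # T_k(+1) = 1
A[N, :] = [(-1) ** k for k in range(N + 1)]    # T_k(-1) = (-1)^k
b[N - 1] = 0.0; b[N] = 0.0                     # homogeneous Dirichlet data

u = np.linalg.solve(A, b)
xx = np.linspace(-1, 1, 201)
err = np.max(np.abs(C.chebval(xx, u) - np.sin(np.pi * xx)))
print(err)  # spectrally small
```

The replaced rows are exactly the "specified polynomial residual" of the abstract: the computed u satisfies a perturbed equation u'' = f + tau-terms in the two highest modes, and any stable scheme imposing the same residual finds the same solution.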


An FMM Accelerated Poisson Solver for Complicated Geometries in the Plane using Function Extension

Fredrik Fryklund, L. Greengard

We describe a new, adaptive solver for the two-dimensional Poisson equation in complicated geometries. Using classical potential theory, we represent the solution as the sum of a volume potential and a double layer potential. Rather than evaluating the volume potential over the given domain, we first extend the source data to a geometrically simpler region with high order accuracy. This allows us to accelerate the evaluation of the volume potential using an efficient, geometry-unaware fast multipole-based algorithm. To impose the desired boundary condition, it remains only to solve the Laplace equation with suitably modified boundary data. This is accomplished with existing fast and accurate boundary integral methods. The novelty of our solver is the scheme used for creating the source extension, assuming it is provided on an adaptive quad-tree. For leaf boxes intersected by the boundary, we construct a universal "stencil" and require that the data be provided at the subset of those points that lie within the domain interior. This universality permits us to precompute and store an interpolation matrix which is used to extrapolate the source data to an extended set of leaf nodes with full tensor-product grids on each. We demonstrate the method's speed, robustness and high-order convergence with several examples, including domains with piecewise smooth boundaries.
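A one-dimensional caricature of the precomputed extension operator: when the interior stencil is universal, the map from interior samples to exterior values is a fixed matrix that can be built once and reused for every cut leaf box. The sketch below (hypothetical node placement and polynomial order; the paper's construction is two-dimensional and tied to its quad-tree stencils) extrapolates a smooth function past the "boundary" at x = 1:

```python
import numpy as np

# Fixed interior stencil (Chebyshev-Lobatto points on [-1, 1]) and
# exterior targets just past the boundary x = 1.
p = 12                                           # polynomial order (assumed)
x_in = np.cos(np.pi * np.arange(p) / (p - 1))    # interior stencil nodes
x_out = np.linspace(1.05, 1.5, 5)                # exterior extension points

# Polynomial fit on the stencil, evaluated outside: the product
# E = V_out @ pinv(V_in) is the precomputable extension matrix.
V_in = np.polynomial.chebyshev.chebvander(x_in, p - 1)
V_out = np.polynomial.chebyshev.chebvander(x_out, p - 1)
E = V_out @ np.linalg.pinv(V_in)

f = np.exp                                       # smooth test data
err = np.max(np.abs(E @ f(x_in) - f(x_out)))
print(err)  # high-order accurate extrapolation
```

Because E depends only on the node geometry, the cost of the pseudoinverse is paid once during precomputation, mirroring the stored interpolation matrix described in the abstract.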


Differentiable Cosmological Simulation with Adjoint Method

Y. Li, C. Modi, Drew Jamieson, Yucheng Zhang, L. Lu, Yu Feng, François Lanusse, L. Greengard

Rapid advances in deep learning have brought not only myriad powerful neural networks but also breakthroughs that benefit established scientific research. In particular, automatic differentiation (AD) tools and computational accelerators like GPUs have facilitated forward modeling of the Universe with differentiable simulations. Current differentiable cosmological simulations are limited by memory and are thus subject to a trade-off between time and space/mass resolution; they typically integrate for only tens of time steps, unlike standard non-differentiable simulations. We present a new approach free of such constraints, using the adjoint method and reverse time integration. It enables larger and more accurate forward modeling, and will improve gradient-based optimization and inference. We implement it in the particle-mesh (PM) N-body library pmwd (particle-mesh with derivatives). Based on the powerful AD system JAX, pmwd is fully differentiable and highly performant on GPUs.
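The key enabler is that a symplectic integrator can be run backward to reconstruct the trajectory on demand during the adjoint pass, instead of storing every step as plain AD would. A toy illustration of that reversibility with a kick-drift-kick step (a harmonic oscillator stand-in, not pmwd's cosmological integrator):

```python
import numpy as np

def kdk_step(x, v, dt, acc):
    # Kick-drift-kick leapfrog: time-reversible in exact arithmetic,
    # so stepping with -dt exactly undoes a step with +dt.
    v = v + 0.5 * dt * acc(x)
    x = x + dt * v
    v = v + 0.5 * dt * acc(x)
    return x, v

acc = lambda x: -x              # harmonic oscillator force
x, v = 1.0, 0.0                 # initial conditions
dt, n = 0.01, 1000

for _ in range(n):              # forward pass (nothing stored)
    x, v = kdk_step(x, v, dt, acc)
for _ in range(n):              # reverse pass: same stepper, negated dt
    x, v = kdk_step(x, v, -dt, acc)

print(x, v)  # recovers (1.0, 0.0) up to roundoff
```

An adjoint implementation interleaves such reverse steps with vector-Jacobian products of the single-step map, giving O(1) memory in the number of time steps at the cost of a second integration.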


pmwd: A Differentiable Cosmological Particle-Mesh N-body Library

Y. Li, L. Lu, C. Modi, Drew Jamieson, Yucheng Zhang, Yu Feng, W. Zhou, Ngai Pok Kwan, François Lanusse, L. Greengard

The formation of large-scale structure, i.e., the evolution and distribution of galaxies, quasars, and dark matter on cosmological scales, requires numerical simulations. Differentiable simulations provide gradients with respect to the cosmological parameters, which can accelerate the extraction of physical information from statistical analyses of observational data. The deep learning revolution has brought not only myriad powerful neural networks but also breakthroughs such as automatic differentiation (AD) tools and computational accelerators like GPUs, facilitating forward modeling of the Universe with differentiable simulations. Because AD needs to save the whole forward evolution history to backpropagate gradients, current differentiable cosmological simulations are limited by memory. Using the adjoint method, with reverse time integration to reconstruct the evolution history, we develop a differentiable cosmological particle-mesh (PM) simulation library, pmwd (particle-mesh with derivatives), with a low memory cost. Based on the powerful AD library JAX, pmwd is fully differentiable and highly performant on GPUs.


Learning Feynman Diagrams with Tensor Trains

Yuriel Nunez-Fernandez, Matthieu Jeannin, Philipp T. Dumitrescu, Thomas Kloss, J. Kaye, Olivier Parcollet, Xavier Waintal

We use tensor network techniques to obtain high-order perturbative diagrammatic expansions for the quantum many-body problem at very high precision. The approach is based on a tensor train parsimonious representation of the sum of all Feynman diagrams, obtained in a controlled and accurate way with the tensor cross interpolation algorithm. It yields the full time evolution of physical quantities in the presence of any arbitrary time-dependent interaction. Our benchmarks on the Anderson quantum impurity problem, within the real-time nonequilibrium Schwinger-Keldysh formalism, demonstrate that this technique supersedes diagrammatic quantum Monte Carlo by orders of magnitude in precision and speed, with convergence rates \(1/N^2\) or faster, where N is the number of function evaluations. The method also works in parameter regimes characterized by strongly oscillatory integrals in high dimension, which suffer from a catastrophic sign problem in quantum Monte Carlo calculations. Finally, we also present two exploratory studies showing that the technique generalizes to more complex situations: a double quantum dot and a single impurity embedded in a two-dimensional lattice.
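A tensor train compresses a d-way tensor into a chain of 3-way cores, after which global contractions such as a sum over all entries (the analog of summing all diagrams) cost O(d) small matrix products instead of an exponential sweep. A generic TT-SVD sketch (not the tensor cross interpolation algorithm of the paper, which builds the cores from sampled entries rather than the full tensor):

```python
import numpy as np

def tt_svd(T, tol=1e-12):
    """Decompose tensor T into tensor-train cores via sequential SVDs."""
    d, shape = T.ndim, T.shape
    cores, r = [], 1
    M = T.reshape(shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))   # truncate to numerical rank
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * shape[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_sum(cores):
    # Sum of all tensor entries: sum each core over its physical index,
    # then chain-multiply the resulting small matrices.
    v = np.ones((1, 1))
    for c in cores:
        v = v @ c.sum(axis=1)
    return v[0, 0]

# A rank-2 4-way tensor: its TT ranks are at most 2, so the cores hold
# far fewer parameters than the 10^4 raw entries.
a, b = np.random.default_rng(1).standard_normal((2, 10))
T = np.einsum('i,j,k,l->ijkl', a, a, a, a) + np.einsum('i,j,k,l->ijkl', b, b, b, b)
cores = tt_svd(T)
print([c.shape for c in cores])
print(abs(tt_sum(cores) - T.sum()))   # matches the brute-force sum
```

The same chain structure lets one contract the TT against quadrature weights in each variable, which is how a compressed integrand yields high-dimensional integrals cheaply.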


libdlr: Efficient imaginary time calculations using the discrete Lehmann representation

J. Kaye, Kun Chen, Hugo U. R. Strand

We introduce libdlr, a library implementing the recently introduced discrete Lehmann representation (DLR) of imaginary time Green's functions. The DLR basis consists of a collection of exponentials chosen by the interpolative decomposition to ensure stable and efficient recovery of Green's functions from imaginary time or Matsubara frequency samples. The library provides subroutines to build the DLR basis and grids, and to carry out various standard operations. The simplicity of the DLR makes it straightforward to incorporate into existing codes as a replacement for less efficient representations of imaginary time Green's functions, and libdlr is intended to facilitate this process. libdlr is written in Fortran, provides a C header interface, and contains a Python module pydlr. We also introduce a stand-alone Julia implementation, Lehmann.jl.
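The premise of the DLR, in miniature: an imaginary time Green's function is well approximated by a small sum of exponentials drawn from the analytic-continuation kernel K(τ, ω) = -e^{-τω}/(1 + e^{-βω}). A toy fit sketching this (the hand-picked logarithmic frequency grid below is purely illustrative; libdlr selects its nodes by an interpolative decomposition of the kernel, which is the actual DLR construction):

```python
import numpy as np

beta = 10.0
# Fermionic imaginary-time kernel; exponents stay <= beta*|w| = 100 here,
# so the naive formula is numerically safe in float64.
K = lambda tau, w: -np.exp(-np.outer(tau, w)) / (1.0 + np.exp(-beta * w))

tau = np.linspace(0.0, beta, 200)
# Hand-picked (illustrative) frequency nodes, log-spaced in both signs.
w_pos = np.logspace(-2, 1, 16)
w_nodes = np.concatenate([-w_pos[::-1], w_pos])

# Reference Green's function: two poles at omega = +/- 0.7, equal weight.
G = K(tau, np.array([0.7, -0.7])) @ np.array([0.5, 0.5])

# Least-squares fit of G by exponentials at the chosen nodes.
coef, *_ = np.linalg.lstsq(K(tau, w_nodes), G, rcond=None)
err = np.max(np.abs(K(tau, w_nodes) @ coef - G))
print(err)  # small residual from a 32-term expansion
```

Once the compact representation is in hand, operations like Fourier transform to Matsubara frequencies or convolution reduce to dense linear algebra on the small coefficient vector, which is what the library's subroutines provide.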


High-Resolution EEG Source Reconstruction with Boundary Element Fast Multipole Method Using Reciprocity Principle and TES Forward Model Matrix

William A. Wartman, Tommi Raij, M. Rachh, Fa-Hsuan Lin, Konstantin Weise, Thomas Knoesche, Burkhard Maess, Carsten H. Wolters, Aapo R. Nummenmaa, Sergey N. Makaroff, Matti Hämäläinen

Background: Accurate high-resolution EEG source reconstruction (localization) is important for several tasks, including rigorous and rapid mental health screening.
Objective: The present study has developed, validated, and applied a new source localization algorithm utilizing a charge-based boundary element fast multipole method (BEM-FMM) coupled with the Helmholtz reciprocity principle and the transcranial electrical stimulation (TES) forward solution.
Methods: The unknown cortical dipole density is reconstructed over the entire cortical surface by expanding it into global basis functions in the form of cortical fields of active TES electrode pairs. These pairs are constructed from the reading electrodes. An analog of the minimum norm estimation (MNE) equation is obtained after substituting this expansion into the reciprocity principle written in terms of measured electrode voltages. A Delaunay (geometrically balanced) triangulation of the electrode cap is introduced first. Basis functions for all electrode pairs connected by the edges of the triangular mesh are precomputed and stored in memory. A smaller set of independent basis functions, selected on the basis of the highest measured voltage differences, is then employed at every time instant.
Results: The method is validated against the classic yet challenging problem of median nerve stimulation and the tangential cortical sources located at the posterior wall of the central sulcus for an N20/P20 peak (2 scanned subjects). The method is further applied to perform source reconstruction of synthesized tangential cortical sources located at the posterior wall of the central sulcus (12 different subjects). In the second case, an average source reconstruction error of 7 mm is reported for the best possible noiseless scenario.
Conclusions: Once static preprocessing with TES electrodes has been done (the basis functions have been computed), our method requires fractions of a second to complete the accurate high-resolution source localization.
Competing Interest Statement: The authors have declared no competing interest.
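The MNE equation referenced above has a compact closed form: for lead field L, measured voltages y, and regularization λ, the minimum norm estimate is x̂ = Lᵀ(LLᵀ + λI)⁻¹y. A toy sketch with a random stand-in lead field (not the BEM-FMM forward model or the TES basis expansion of the paper):

```python
import numpy as np

# Minimum norm estimation: recover source amplitudes x from sensor
# voltages y = L x, with L a (here random, stand-in) lead field matrix.
rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 500
L = rng.standard_normal((n_sensors, n_sources))

x_true = np.zeros(n_sources)
x_true[123] = 1.0                      # a single active source
y = L @ x_true

lam = 1e-6                             # Tikhonov regularization parameter
# x_hat = L^T (L L^T + lam I)^{-1} y : the minimum-norm solution,
# computed by solving a small n_sensors x n_sensors system.
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

print(np.argmax(np.abs(x_hat)))        # peaks at the true source index
```

Solving the small sensor-space system rather than the large source-space one is what makes per-time-instant localization cheap once the forward operator is precomputed.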

bioRxiv
November 1, 2022

Catching actin proteins in action

P. Cossio, Glen M. Hocky

Two groups have visualized actin — the protein polymer that gives cells their shape — at high resolution. The structures provide in-depth views of the polymer as it adopts fleeting states and undergoes conformational changes.
