Publications

Solution of Stokes flow in complex nonsmooth 2D geometries via a linear-scaling high-order adaptive integral equation scheme

B. Wu, H. Zhu, A. Barnett, S. Veerapaneni

We present a fast, high-order accurate and adaptive boundary integral scheme for solving the Stokes equations in complex---possibly nonsmooth---geometries in two dimensions. The key ingredient is a set of panel quadrature rules capable of evaluating weakly-singular, nearly-singular and hyper-singular integrals to high accuracy. Near-singular integral evaluation, in particular, is done using an extension of the scheme developed in J.~Helsing and R.~Ojala, {\it J. Comput. Phys.} {\bf 227} (2008) 2899--2921. The boundary of the given geometry is ``panelized'' automatically to achieve user-prescribed precision. We show that this adaptive panel refinement procedure works well in practice even in the case of complex geometries with a large number of corners. In one example, for instance, a model 2D vascular network with 378 corners required fewer than 200K discretization points to obtain 9-digit solution accuracy.
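To make the adaptive panelization concrete, here is a minimal Python sketch of one standard refinement criterion: bisect a parameter interval until the tail of a Legendre expansion on each panel falls below the prescribed tolerance. This illustrates the general technique only, not the authors' code; a production Stokes solver would test the layer densities rather than a model function, and would pair the panels with the special quadrature rules described above.

```python
import numpy as np

def panel_coeffs(f, a, b, p=16):
    """Legendre coefficients of f on [a, b], from p Gauss-Legendre nodes."""
    x, _ = np.polynomial.legendre.leggauss(p)
    t = 0.5 * (a + b) + 0.5 * (b - a) * x            # nodes mapped to [a, b]
    V = np.polynomial.legendre.legvander(x, p - 1)   # Legendre Vandermonde
    return np.linalg.solve(V, f(t))                  # interpolatory coefficients

def adaptive_panels(f, a, b, tol=1e-9, p=16, depth=0, max_depth=40):
    """Bisect [a, b] until the Legendre tail of f on each panel is below tol."""
    c = panel_coeffs(f, a, b, p)
    tail = np.abs(c[-2:]).max() / max(np.abs(c).max(), 1e-300)
    if tail < tol or depth >= max_depth:
        return [(a, b)]
    m = 0.5 * (a + b)
    return (adaptive_panels(f, a, m, tol, p, depth + 1, max_depth) +
            adaptive_panels(f, m, b, tol, p, depth + 1, max_depth))

# A kink at t = 1/3 (a "corner") forces dyadic grading of panels toward it.
panels = adaptive_panels(lambda t: np.abs(t - 1.0 / 3.0), 0.0, 1.0)
print(len(panels), "panels; smallest width:", min(b - a for a, b in panels))
```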

Factorization of the translation kernel for fast rigid image alignment

A. Rangan, M. Spivak, J. Andén, A. Barnett

An important component of many image alignment methods is the calculation of inner products (correlations) between an image of $n\times n$ pixels and another image translated by some shift and rotated by some angle. For robust alignment of an image pair, the number of considered shifts and angles is typically high, thus the inner product calculation becomes a bottleneck. Existing methods, based on fast Fourier transforms (FFTs), compute all such inner products with computational complexity $\mathcal{O}(n^3 \log n)$ per image pair, which is reduced to $\mathcal{O}(N n^2)$ if only $N$ distinct shifts are needed. We propose to use a factorization of the translation kernel (FTK), an optimal interpolation method which represents images in a Fourier--Bessel basis and uses a rank-$H$ approximation of the translation kernel via an operator singular value decomposition (SVD). Its complexity is $\mathcal{O}(Hn(n + N))$ per image pair. We prove that $H = \mathcal{O}((W + \log(1/\epsilon))^2)$, where $2W$ is the magnitude of the maximum desired shift in pixels and $\epsilon$ is the desired accuracy. For fixed $W$ this leads to an acceleration when $N$ is large, such as when sub-pixel shift grids are considered. Finally, we present numerical results in an electron cryomicroscopy application showing speedup factors of $3$-$10$ with respect to the state of the art.
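As a concrete, heavily simplified illustration of the low-rank idea, the sketch below works in one dimension with a plain Fourier basis rather than the paper's 2-D Fourier--Bessel setting, and all sizes are arbitrary. Translation acts diagonally in Fourier space, so the kernel collecting the phases for all $N$ shifts has low numerical rank when the shifts are confined to a small window, and its truncated SVD evaluates all shifted inner products at the reduced cost.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, W, eps = 128, 400, 4.0, 1e-8
k = np.fft.fftfreq(n, d=1.0 / n)               # integer frequencies
s = np.linspace(-W, W, N)                      # dense grid of sub-pixel shifts
K = np.exp(-2j * np.pi * np.outer(s, k) / n)   # (N, n) translation kernel

U, sv, Vh = np.linalg.svd(K, full_matrices=False)
H = int(np.sum(sv > eps * sv[0]))              # numerical rank at accuracy eps
print("numerical rank H =", H)                 # small compared with n and N

f, g = rng.standard_normal(n), rng.standard_normal(n)
F, G = np.fft.fft(f), np.fft.fft(g)
prod = np.conj(F) * G / n                      # per-frequency cross terms

exact = K @ prod                               # brute force: O(N n) per pair
fast = U[:, :H] @ (sv[:H] * (Vh[:H] @ prod))   # factored: O(H (n + N))
print("max error:", np.abs(fast - exact).max())
```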

SpikeForest, reproducible web-facing ground-truth validation of automated neural spike sorters

J. Magland, J. Jun, E. Lovero, A. J. Morley, C. L. Hurwitz, A. P. Buccino, S. Garcia, A. Barnett

Spike sorting is a crucial step in electrophysiological studies of neuronal activity. While many spike sorting packages are available, there is little consensus about which are most accurate under different experimental conditions. SpikeForest is an open-source and reproducible software suite that benchmarks the performance of automated spike sorting algorithms across an extensive, curated database of ground-truth electrophysiological recordings, displaying results interactively on a continuously-updating website. With contributions from eleven laboratories, our database currently comprises 650 recordings (1.3 TB total size) with around 35,000 ground-truth units. These data include paired intracellular/extracellular recordings and state-of-the-art simulated recordings. Ten of the most popular spike sorting codes are wrapped in a Python package and evaluated on a compute cluster using an automated pipeline. SpikeForest documents community progress in automated spike sorting, and guides neuroscientists to an optimal choice of sorter and parameters for a wide range of probes and brain regions.
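The comparison against ground truth rests on a per-unit agreement score. The sketch below is an illustrative reimplementation of one common form of that score (matched spikes divided by the union of both spike sets, with a tolerance in samples); the tolerance value and function names are assumptions here, and the actual SpikeForest metrics live in its open-source pipeline.

```python
import numpy as np

def agreement(true_times, sorted_times, tol=30):
    """Overlap score between two spike-time arrays (sample indices):
    agreement = n_match / (n_true + n_sorted - n_match)."""
    true_times = np.sort(np.asarray(true_times))
    sorted_times = np.sort(np.asarray(sorted_times))
    i = j = matched = 0
    while i < len(true_times) and j < len(sorted_times):
        dt = sorted_times[j] - true_times[i]
        if abs(dt) <= tol:          # greedy match within +-tol samples
            matched += 1
            i += 1
            j += 1
        elif dt < 0:
            j += 1
        else:
            i += 1
    return matched / (len(true_times) + len(sorted_times) - matched)

# Toy usage: a sorter that misses one spike and adds one false positive.
truth = [100, 500, 900, 1300]
found = [105, 498, 1302, 2000]
print(f"agreement = {agreement(truth, found):.3f}")   # 3 / (4+4-3) = 0.6
```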

Low-Rank Toeplitz Matrix Estimation Via Random Ultra-Sparse Rulers

H. Lawrence, Jerry Li, Cameron Musco, Christopher Musco

We study how to estimate a nearly low-rank Toeplitz covariance matrix T from compressed measurements. Recent work of Qiao and Pal addresses this problem by combining sparse rulers (sparse linear arrays) with frequency finding (sparse Fourier transform) algorithms applied to the Vandermonde decomposition of T. Analytical bounds on the sample complexity are shown, under the assumption of sufficiently large gaps between the frequencies in this decomposition. In this work, we introduce random ultra-sparse rulers and propose an improved approach based on these objects. Our random rulers effectively apply a random permutation to the frequencies in T's Vandermonde decomposition, letting us avoid frequency gap assumptions and leading to improved sample complexity bounds. In the special case when T is circulant, we theoretically analyze the performance of our method when combined with sparse Fourier transform algorithms based on random hashing. We also show experimentally that our ultra-sparse rulers give significantly more robust and sample-efficient estimation than baseline methods.
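For background, here is a sketch of the deterministic sparse-ruler estimator that this work improves on (illustrative only; the paper's contribution is a randomized ultra-sparse construction paired with sparse Fourier transforms). A ruler whose pairwise differences cover every lag lets each Toeplitz entry be estimated from samples at the ruler positions alone.

```python
import numpy as np

def estimate_toeplitz(samples, ruler, n):
    """Estimate an n x n Hermitian Toeplitz covariance from observations
    taken only at ruler positions (samples: shape (m, len(ruler)))."""
    sums = np.zeros(n, dtype=complex)
    counts = np.zeros(n)
    for ai, a in enumerate(ruler):
        for bi, b in enumerate(ruler):
            d = a - b
            if 0 <= d < n:
                sums[d] += np.mean(samples[:, ai] * np.conj(samples[:, bi]))
                counts[d] += 1
    t = sums / np.maximum(counts, 1)          # averaged estimate of each lag
    return np.array([[t[i - j] if i >= j else np.conj(t[j - i])
                      for j in range(n)] for i in range(n)])

rng = np.random.default_rng(1)
n, m = 7, 20000
ruler = [0, 1, 4, 6]                          # pairwise differences cover 0..6
omegas, amps = np.array([0.5, 1.7]), np.array([1.0, 0.6])
phases = rng.uniform(0, 2 * np.pi, (m, 2))    # random-phase sinusoid mixture
x = (amps * np.exp(1j * phases)) @ np.exp(1j * np.outer(omegas, np.arange(n)))
T_hat = estimate_toeplitz(x[:, ruler], ruler, n)
t_true = (amps ** 2) @ np.exp(1j * np.outer(omegas, np.arange(n)))
print("max error in first column:", np.abs(T_hat[:, 0] - t_true).max())
```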

Mean-field inference methods for neural networks

Machine learning algorithms relying on deep neural networks have recently enabled a great leap forward in artificial intelligence. Despite the popularity of their applications, the efficiency of these algorithms remains largely unexplained from a theoretical point of view. The mathematical description of learning problems involves very large collections of interacting random variables, which are difficult to handle analytically as well as numerically. This complexity is precisely the object of study of statistical physics, whose mission, originally directed toward natural systems, is to understand how macroscopic behaviors arise from microscopic laws. Mean-field methods are one type of approximation strategy developed in this spirit. We review a selection of classical mean-field methods and recent progress relevant for inference in neural networks. In particular, we recall the principles behind the derivations of high-temperature expansions, the replica method and message passing algorithms, highlighting their equivalences and complementarities. We also provide references for past and current directions of research on neural networks relying on mean-field methods.

Natural evolution strategies and variational Monte Carlo

Tianchen Zhao, G. Carleo, J. Stokes, Shravan Veerapaneni

A notion of quantum natural evolution strategies is introduced, which provides a geometric synthesis of a number of known quantum/classical algorithms for performing classical black-box optimization. Recent work of Gomes et al. [2019] on combinatorial optimization using neural quantum states is pedagogically reviewed in this context, emphasizing the connection with natural evolution strategies. The algorithmic framework is illustrated for approximate combinatorial optimization problems, and a systematic strategy is found for improving the approximation ratios. In particular, it is found that natural evolution strategies can achieve state-of-the-art approximation ratios for Max-Cut, at the expense of increased computation time.
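To fix ideas, the following is a vanilla classical evolution-strategies loop for Max-Cut over a product-Bernoulli search distribution. This is a plain score-function gradient sketch under made-up sizes and hyperparameters; the natural-gradient variant preconditions the update with the Fisher information, and the paper's quantum version replaces the distribution with a neural quantum state.

```python
import numpy as np

def cut_value(x, W):
    """Cut weight of a +-1 assignment x on a symmetric weight matrix W."""
    return 0.25 * np.sum(W * (1.0 - np.outer(x, x)))

rng = np.random.default_rng(0)
n = 12
W = np.triu(rng.random((n, n)), 1)
W = W + W.T                                   # random weighted graph

theta = np.zeros(n)                           # logits of independent spins
lr, pop = 0.2, 200
for step in range(300):
    p = 1.0 / (1.0 + np.exp(-theta))          # P(spin_i = +1)
    x = np.where(rng.random((pop, n)) < p, 1.0, -1.0)
    f = np.array([cut_value(xi, W) for xi in x])
    f = (f - f.mean()) / (f.std() + 1e-12)    # variance-reducing baseline
    score = (x + 1.0) / 2.0 - p               # d log-likelihood / d theta
    theta += lr * (score.T @ f) / pop         # score-function gradient ascent

best = np.where(theta >= 0, 1.0, -1.0)        # most likely configuration
print("cut found:", cut_value(best, W))
```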

The general theory of permutation equivariant neural networks and higher order graph variational encoders

E. Thiede, Truong Son Hy, R. Kondor

Previous work on symmetric group equivariant neural networks generally only considered the case where the group acts by permuting the elements of a single vector. In this paper we derive formulae for general permutation equivariant layers, including the case where the layer acts on matrices by permuting their rows and columns simultaneously. This case arises naturally in graph learning and relation learning applications. As a specific case of higher order permutation equivariant networks, we present a second order graph variational encoder, and show that the latent distribution of equivariant generative models must be exchangeable. We demonstrate the efficacy of this architecture on the tasks of link prediction in citation graphs and molecular graph generation.
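The simultaneous row-and-column case can be made concrete with a small numerical check: any linear layer built from operations that commute with conjugation by a permutation matrix is automatically equivariant. The sketch below implements a handful of such operations (a subset chosen for illustration; the paper derives the complete formulae) and verifies the equivariance identity.

```python
import numpy as np

def equivariant_ops(A):
    """A few linear maps L with L(P A P^T) = P L(A) P^T for any permutation P
    (a subset of the full basis, for illustration)."""
    n = A.shape[0]
    return np.stack([
        A,                                                 # identity
        A.T,                                               # transpose
        np.diag(np.diag(A)),                               # keep diagonal
        A.sum(1, keepdims=True) @ np.ones((1, n)) / n,     # broadcast row sums
        np.ones((n, 1)) @ A.sum(0, keepdims=True) / n,     # broadcast col sums
        np.full((n, n), A.mean()),                         # broadcast mean
        np.diag(A).mean() * np.eye(n),                     # mean of diagonal
    ])

def layer(A, w):
    """Second-order permutation-equivariant linear layer: w-weighted sum."""
    return np.tensordot(w, equivariant_ops(A), axes=1)

rng = np.random.default_rng(0)
n = 5
A, w = rng.standard_normal((n, n)), rng.standard_normal(7)
P = np.eye(n)[rng.permutation(n)]
err = np.abs(layer(P @ A @ P.T, w) - P @ layer(A, w) @ P.T).max()
print("equivariance error:", err)             # ~ machine precision
```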

arXiv e-prints, April 8, 2020

Shrinkage improves estimation of microbial associations under different normalization methods

M. Badri, Z. Kurtz, R. Bonneau, C. Müller

Consistent estimation of associations in microbial genomic survey count data is fundamental to microbiome research. Technical limitations, including compositionality, low sample sizes, and technical variability, obstruct standard application of association measures and require data normalization prior to estimating associations. Here, we investigate the interplay between data normalization and microbial association estimation by a comprehensive analysis of statistical consistency. Leveraging the large sample size of the American Gut Project (AGP), we assess the consistency of the two prominent linear association estimators, correlation and proportionality, under different sample scenarios and data normalization schemes, including RNA-seq analysis workflows and log-ratio transformations. We show that shrinkage estimation, a standard technique in high-dimensional statistics, can universally improve the quality of association estimates for microbiome data. We find that large-scale association patterns in the AGP data can be grouped into five normalization-dependent classes. Using microbial association network construction and clustering as examples of exploratory data analysis, we show that variance-stabilizing and log-ratio approaches provide the most consistent estimation of taxonomic and structural coherence. Taken together, the findings from our reproducible analysis workflow have important implications for microbiome studies in multiple stages of analysis, particularly when only small sample sizes are available.
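As a minimal illustration of the combination studied here, the sketch below applies a clr (centered log-ratio) normalization with a pseudocount and then shrinks the sample correlation matrix toward the identity. The fixed shrinkage intensity and the Poisson toy data are assumptions for the example; data-driven Ledoit-Wolf style intensities are the standard choice in practice.

```python
import numpy as np

def clr(counts, pseudo=1.0):
    """Centered log-ratio transform; rows are samples, columns are taxa."""
    logx = np.log(counts + pseudo)
    return logx - logx.mean(axis=1, keepdims=True)

def shrunk_correlation(X, lam=0.3):
    """Convex combination of the sample correlation and an identity target.
    (lam is a fixed hyperparameter here, purely for illustration.)"""
    R = np.corrcoef(X, rowvar=False)
    return (1.0 - lam) * R + lam * np.eye(R.shape[0])

rng = np.random.default_rng(0)
counts = rng.poisson(lam=5.0, size=(50, 200))   # 50 samples, 200 taxa
R = shrunk_correlation(clr(counts))
# shrinkage keeps the estimate well conditioned even with taxa >> samples
print("smallest eigenvalue:", np.linalg.eigvalsh(R).min())
```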

April 4, 2020

Deep Network Classification by Scattering and Homotopy Dictionary Learning

John Zarka, Louis Thiry, Tomás Angles, S. Mallat

We introduce a sparse scattering deep convolutional neural network, which provides a simple model to analyze properties of deep representation learning for classification. Learning a single dictionary matrix with a classifier yields a higher classification accuracy than AlexNet on the ImageNet 2012 dataset. The network first applies a scattering transform that linearizes variabilities due to geometric transformations such as translations and small deformations. A sparse $\ell^1$ dictionary coding reduces intra-class variability while preserving class separation through projections over unions of linear spaces. It is implemented in a deep convolutional network with a homotopy algorithm that converges exponentially. A convergence proof is given in a general framework that includes ALISTA. Classification results are analyzed on ImageNet.
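For readers unfamiliar with the sparse-coding step, here is the plain ISTA iteration for the underlying $\ell^1$ problem. This is a baseline sketch only; the paper's homotopy algorithm is an accelerated variant with exponential convergence, and its ALISTA connection is not reproduced here.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, x, lam=0.1, iters=500):
    """Minimize 0.5*||x - D z||^2 + lam*||z||_1 by proximal gradient steps."""
    L = np.linalg.norm(D, 2) ** 2             # gradient Lipschitz constant
    z = np.zeros(D.shape[1])
    for _ in range(iters):
        z = soft_threshold(z + D.T @ (x - D @ z) / L, lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                # unit-norm dictionary atoms
z_true = np.zeros(256)
z_true[rng.choice(256, 5, replace=False)] = 1.0
x = D @ z_true + 0.01 * rng.standard_normal(64)
z = ista(D, x, lam=0.05)
print("recovered support size:", int(np.count_nonzero(np.abs(z) > 1e-3)))
```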

Hyper-molecules: on the representation and recovery of dynamical structures, with applications to flexible macro-molecules in cryo-EM

Roy R Lederman, J. Andén, Amit Singer

Cryo-electron microscopy (cryo-EM), the subject of the 2017 Nobel Prize in Chemistry, is a technology for determining the 3-D structure of macromolecules from many noisy 2-D projections of instances of these macromolecules, whose orientations and positions are unknown. The molecular structures are not rigid objects, but flexible objects involved in dynamical processes. The different conformations are exhibited by different instances of the macromolecule observed in a cryo-EM experiment, each of which is recorded as a particle image. The range of conformations and the conformation of each particle are not known a priori; one of the great promises of cryo-EM is to map this conformation space. Remarkable progress has been made in determining rigid structures from homogeneous samples of molecules in spite of the unknown orientation of each particle image, and significant progress has been made in recovering a few distinct states from mixtures of rather distinct conformations, but more complex heterogeneous samples remain a major challenge. We introduce the ``hyper-molecule'' framework for modeling structures across different states of heterogeneous molecules, including continua of states. The key idea behind this framework is representing heterogeneous macromolecules as high-dimensional objects, with the additional dimensions representing the conformation space. This idea is then refined to model properties such as localized heterogeneity. In addition, we introduce an algorithmic framework for recovering such maps of heterogeneous objects from experimental data using a Bayesian formulation of the problem and Markov chain Monte Carlo (MCMC) algorithms to address the computational challenges in recovering these high-dimensional hyper-molecules. We demonstrate these ideas in a prototype applied to synthetic data.
