381 Publications

Escaping saddle points efficiently with occupation-time-adapted perturbation

Xin Guo, J. Han, Mahan Tajrobehkar, Wenpin Tang

Motivated by the super-diffusivity of self-repelling random walk, which has roots in statistical physics, this paper develops a new perturbation mechanism for optimization algorithms. In this mechanism, perturbations are adapted to the history of states via the notion of occupation time. After integrating this mechanism into the framework of perturbed gradient descent (PGD) and perturbed accelerated gradient descent (PAGD), two new algorithms are proposed: perturbed gradient descent adapted to occupation time (PGDOT) and its accelerated version (PAGDOT). PGDOT and PAGDOT are guaranteed to avoid getting stuck at non-degenerate saddle points, and are shown to converge to second-order stationary points at least as fast as PGD and PAGD, respectively. The theoretical analysis is corroborated by empirical studies in which the new algorithms consistently escape saddle points and outperform not only their counterparts, PGD and PAGD, but also other popular alternatives including stochastic gradient descent, Adam, and several state-of-the-art adaptive gradient methods.
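
The occupation-time adaptation can be illustrated with a toy sketch. The code below is not the authors' PGDOT algorithm: the grid-based visit counter, the perturbation schedule r0*sqrt(visits), and the quadratic test objective are all illustrative assumptions. It only shows the general idea of coupling the perturbation size to how long the iterates have lingered in a region.

```python
import numpy as np

def grad_f(x):
    # Toy objective with a non-degenerate saddle at the origin:
    # f(x) = 0.5 * (x[0]**2 - x[1]**2), so grad f = (x[0], -x[1]).
    return np.array([x[0], -x[1]])

def pgdot_sketch(x0, eta=0.1, g_thresh=1e-3, r0=1e-3, cell=0.05, n_iter=200, seed=0):
    """Gradient descent with a perturbation adapted to occupation time:
    when the gradient is small (a candidate saddle region), add noise whose
    radius grows with the number of past visits to the current spatial cell."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    visits = {}                                   # crude occupation-time counter
    for _ in range(n_iter):
        key = tuple(np.floor(x / cell).astype(int))
        visits[key] = visits.get(key, 0) + 1
        g = grad_f(x)
        if np.linalg.norm(g) < g_thresh:
            r = r0 * np.sqrt(visits[key])         # occupation-time-adapted radius
            x = x + r * rng.standard_normal(x.shape)
        x = x - eta * g
    return x

print(pgdot_sketch([1e-6, 0.0]))  # leaves the saddle along the negative-curvature direction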

Statistical Component Separation for Targeted Signal Recovery in Noisy Mixtures

B. Régaldo-Saint Blancard, M. Eickenberg

Separating signals from an additive mixture may be an unnecessarily hard problem when one is only interested in specific properties of a given signal. In this work, we tackle simpler "statistical component separation" problems that focus on recovering a predefined set of statistical descriptors of a target signal from a noisy mixture. Assuming access to samples of the noise process, we investigate a method devised to match the statistics of the solution candidate corrupted by noise samples with those of the observed mixture. We first analyze the behavior of this method using simple examples with analytically tractable calculations. Then, we apply it in an image denoising context employing 1) wavelet-based descriptors and 2) ConvNet-based descriptors on astrophysics and ImageNet data. In the case of 1), we show that our method better recovers the descriptors of the target data than a standard denoising method in most situations. Additionally, despite not being constructed for this purpose, it performs surprisingly well in terms of peak signal-to-noise ratio on full signal reconstruction. In comparison, representation 2) appears less suitable for image denoising. Finally, we extend this method by introducing a diffusive stepwise algorithm which gives a new perspective on the initial method and leads to promising results for image denoising under specific circumstances.
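
As a rough illustration of the matching idea (not the authors' wavelet or ConvNet descriptors), the sketch below uses a toy 1D signal and a very simple descriptor set (mean plus power spectrum), and adjusts a candidate u so that the average descriptors of u + noise match those of the observed mixture. All names, the descriptor choice, and the optimizer settings are illustrative assumptions.

```python
import torch

def descriptors(s):
    # Toy statistical descriptors: mean of the signal and its power spectrum.
    power = torch.abs(torch.fft.rfft(s)) ** 2 / s.numel()
    return torch.cat([s.mean().reshape(1), power])

def component_separation(y, noise_samples, n_steps=500, lr=0.05):
    """Sketch of statistical component separation: find u such that the
    descriptors of (u + noise) match, on average over the available noise
    samples, the descriptors of the observed mixture y."""
    target = descriptors(y)
    u = y.clone().requires_grad_(True)            # initialize at the observed mixture
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        stats = torch.stack([descriptors(u + n) for n in noise_samples]).mean(0)
        loss = ((stats - target) ** 2).mean()
        loss.backward()
        opt.step()
    return u.detach()

# Toy example: smooth target plus white noise.
torch.manual_seed(0)
t = torch.linspace(0, 1, 256)
x = torch.sin(2 * torch.pi * 4 * t)               # clean target signal
y = x + 0.5 * torch.randn(256)                    # observed mixture
noise_samples = 0.5 * torch.randn(32, 256)        # samples of the noise process
u = component_separation(y, noise_samples)
print("descriptor error of y vs clean x:", float(((descriptors(y) - descriptors(x)) ** 2).mean()))
print("descriptor error of u vs clean x:", float(((descriptors(u) - descriptors(x)) ** 2).mean()))
```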

Nonlinear spiked covariance matrices and signal propagation in deep neural networks

Zhichao Wang, D. Wu, Zhou Fan

Many recent works have studied the eigenvalue spectrum of the Conjugate Kernel (CK) defined by the nonlinear feature map of a feedforward neural network. However, existing results only establish weak convergence of the empirical eigenvalue distribution, and fall short of providing precise quantitative characterizations of the “spike” eigenvalues and eigenvectors that often capture the low-dimensional signal structure of the learning problem. In this work, we characterize these signal eigenvalues and eigenvectors for a nonlinear version of the spiked covariance model, including the CK as a special case. Using this general result, we give a quantitative description of how spiked eigenstructure in the input data propagates through the hidden layers of a neural network with random weights. As a second application, we study a simple regime of representation learning where the weight matrix develops a rank-one signal component over training and characterize the alignment of the target function with the spike eigenvector of the CK on test data.
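
The propagation of spiked structure through one random layer can be checked numerically. The sketch below is only a simulation consistent with the setting described (rank-one spiked input, random weights, a nonlinearity); the dimensions, spike strength, and tanh activation are illustrative, and the paper's precise quantitative predictions are not computed here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 300, 300            # samples, input dimension, hidden width
theta = 5.0                         # spike strength

# Spiked covariance input: x_i = sqrt(theta) * g_i * u + Gaussian noise.
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
X = np.sqrt(theta) * rng.standard_normal((n, 1)) * u + rng.standard_normal((n, d))

def top_eig(M):
    vals, vecs = np.linalg.eigh(M)
    return vals[-1], vecs[:, -1]

# Sample covariance of the input vs. the post-activation (CK-type) features.
W = rng.standard_normal((d, m)) / np.sqrt(d)
F = np.tanh(X @ W)
lam_in, v_in = top_eig(X.T @ X / n)
lam_out, v_out = top_eig(F.T @ F / n)

# Alignment of the feature spike with the propagated signal direction W^T u.
w_u = W.T @ u
w_u /= np.linalg.norm(w_u)
print("input spike eigenvalue:", round(lam_in, 2))
print("feature spike eigenvalue:", round(lam_out, 2))
print("alignment of feature spike with W^T u:", round(abs(v_out @ w_u), 2))
```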

For how many iterations should we run Markov chain Monte Carlo?

C. Margossian, Andrew Gelman

Standard Markov chain Monte Carlo (MCMC) admits three fundamental control parameters: the number of chains, the length of the warmup phase, and the length of the sampling phase. These control parameters play a large role in determining the amount of computation we deploy. In practice, we need to walk a line between achieving sufficient precision and not wasting precious computational resources and time. We review general strategies to check the length of the warmup and sampling phases, and examine the three control parameters of MCMC in the contexts of CPU- and GPU-based hardware. Our discussion centers around three tasks: (1) inference about a latent variable, (2) computation of expectation values and quantiles, and (3) diagnostics to check the reliability of the estimators. This chapter begins with general recommendations on the control parameters of MCMC, which have been battle-tested over the years and often motivate defaults in Bayesian statistical software. Usually we do not know ahead of time how a sampler will interact with a target distribution, and so the choice of MCMC algorithm and its control parameters tends to be based on experience and re-evaluated after simulations have been obtained and analyzed. The second part of this chapter provides a theoretical motivation for our recommended approach, with pointers to some concerns and open problems. We also examine recent developments on the algorithmic and hardware fronts, which motivate new computational approaches to MCMC.
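
As a concrete, deliberately simple illustration of the sampling-phase diagnostics discussed here, the sketch below runs four random-walk Metropolis chains targeting a standard normal and computes a split-R-hat and a crude autocorrelation-based effective sample size. These are minimal stand-ins, not the production diagnostics in Bayesian software, and the chain lengths and step size are arbitrary choices.

```python
import numpy as np

def rw_metropolis(n_iter, step=1.0, seed=0):
    # Random-walk Metropolis chain targeting a standard normal distribution.
    rng = np.random.default_rng(seed)
    x, out = 0.0, np.empty(n_iter)
    for t in range(n_iter):
        prop = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
            x = prop
        out[t] = x
    return out

def split_rhat(chains):
    # Split each chain in half and compare within- and between-half variances.
    halves = np.array([h for c in chains for h in np.split(c, 2)])
    m, n = halves.shape
    W = halves.var(axis=1, ddof=1).mean()
    B = n * halves.mean(axis=1).var(ddof=1)
    return np.sqrt(((n - 1) / n * W + B / n) / W)

def ess(chain, max_lag=200):
    # Crude effective sample size from the empirical autocorrelation function.
    c = chain - chain.mean()
    acf = np.correlate(c, c, "full")[len(c) - 1:] / (c @ c)
    rho = acf[1:max_lag]
    rho = rho[:np.argmax(rho < 0)] if np.any(rho < 0) else rho
    return len(chain) / (1 + 2 * rho.sum())

warmup, sampling = 500, 2000
chains = [rw_metropolis(warmup + sampling, seed=s)[warmup:] for s in range(4)]
print("split-Rhat:", round(split_rhat(chains), 3))
print("ESS per chain:", [int(ess(c)) for c in chains])
```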

Deep reinforcement learning in finite-horizon to explore the most probable transition pathway

Jin Guo, Ting Gao, Peng Zhang, J. Han, Jinqiao Duan

In many scientific and engineering problems, noise and nonlinearity are unavoidable and give rise to interesting mathematical questions such as transition phenomena. This paper focuses on efficiently discovering the most probable transition pathway of a stochastic dynamical system using reinforcement learning. With the Onsager–Machlup action functional theory to quantify rare events in stochastic dynamical systems, finding the most probable pathway is equivalent to solving a variational problem on the action functional. When the action functional cannot be expressed explicitly in terms of paths near the reference orbit, the variational problem must be converted into an optimal control problem. First, by integrating terminal prediction into the reinforcement learning framework, we develop a Terminal Prediction Deep Deterministic Policy Gradient (TP-DDPG) algorithm to solve the finite-horizon optimal control problem in a forward manner. Next, we present a convergence analysis of our algorithm for the value function in terms of the neural network’s approximation error and estimation error. Finally, we conduct experiments on transition problems of various dimensions arising in applications to illustrate the effectiveness of our algorithm.
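
The underlying variational problem can be made concrete with a toy example. The sketch below is not the paper's TP-DDPG algorithm; it directly minimizes a discretized Onsager–Machlup action (with the divergence correction term omitted) for a one-dimensional double-well SDE, using gradient descent over the interior path points. The potential, horizon, noise level, and discretization are illustrative assumptions.

```python
import torch

# Double-well drift: dX = -V'(X) dt + sqrt(eps) dW, with V(x) = (x^2 - 1)^2 / 4.
def drift(x):
    return -x * (x**2 - 1)

def om_action(path, dt, eps):
    """Discretized Onsager-Machlup action (divergence term omitted):
    S = sum_k dt/(2*eps) * ((x_{k+1} - x_k)/dt - b(x_k))^2."""
    v = (path[1:] - path[:-1]) / dt
    return (dt / (2 * eps)) * ((v - drift(path[:-1])) ** 2).sum()

# Most probable transition path from x = -1 to x = +1 over time T.
T, n, eps = 10.0, 200, 0.1
dt = T / n
interior = torch.linspace(-1.0, 1.0, n + 1)[1:-1].clone().requires_grad_(True)
opt = torch.optim.Adam([interior], lr=0.01)
for _ in range(2000):
    opt.zero_grad()
    path = torch.cat([torch.tensor([-1.0]), interior, torch.tensor([1.0])])
    loss = om_action(path, dt, eps)
    loss.backward()
    opt.step()
print("action of the optimized path:", float(loss))
```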

Envelopes of Horospheres and Weingarten Surfaces in Hyperbolic 3-Space

We derive basic differential geometric formulae for surfaces in hyperbolic space represented as envelopes of horospheres. The dual notion of parallel hypersurfaces is also studied. The representation is applied to prove existence and regularity theorems for Weingarten surfaces in H^3 that satisfy (1-a)K = a(2-H) for some a < 0 and have a specified boundary curve at infinity. These surfaces are shown to be closely connected to conformal mappings of domains in S^2 into the unit disk and provide Riemannian interpretations for some conformal invariants associated to such mappings.
This paper was originally written in 1984, before I learned to use TeX, and was typed by one of the secretaries in the Princeton Math Department. It was, more or less, my first original work after my dissertation. For some reason, I was not able to get this paper published in a timely manner. The results and perspective in this paper have proved to be useful to a variety of people, some of whom asked me to render the article into TeX and post it to the arXiv. I had been seriously thinking about doing this when Martin Bridgeman sent me a transcription of my original article into TeX. I am extremely grateful to him for the effort he has put into this project.
The paper is now formatted in a more or less modern AMS-article style, but apart from additional punctuation, a few corrections, and some minor stylistic changes, the content has been reproduced largely as it originally was. Remarks about the 'state of the art' in hyperbolic geometry are obviously out of date, as there has been enormous progress in many aspects of this still rich subject.

The magnetic gradient scale length explains why certain plasmas require close external magnetic coils

John Kappel, Matt Landreman, D. Malhotra

The separation between the last closed flux surface of a plasma and the external coils that magnetically confine it is a limiting factor in the construction of fusion-capable plasma devices. This plasma-coil separation must be large enough so that components such as a breeding blanket and neutron shielding can fit between the plasma and the coils. Plasma-coil separation affects reactor size, engineering complexity, and particle loss due to field ripple. For some plasmas it can be difficult to produce the desired flux surface shaping with distant coils, and for other plasmas it is infeasible altogether. Here, we seek to understand the underlying physics that limits plasma-coil separation and explain why some configurations require close external coils. In this paper, we explore the hypothesis that the limiting plasma-coil separation is set by the shortest scale length of the magnetic field, as expressed by the magnetic gradient tensor ∇B. We tested this hypothesis on a database of 40 stellarator and tokamak configurations. Within this database, the coil-to-plasma distance compared to the minor radius varies by over an order of magnitude. The magnetic scale length is well correlated with the coil-to-plasma distance of actual coil designs generated using the REGCOIL method (Landreman 2017 Nucl. Fusion 57 046003). Additionally, this correlation reveals a general trend that larger plasma-coil separation is possible with a small number of field periods.
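
The quantity at the center of this hypothesis can be estimated numerically for any given field. The sketch below is not the REGCOIL calculation or the paper's configuration database; it simply finite-differences a purely toroidal 1/R analytic field and evaluates one possible definition of the gradient scale length, L = |B| / ||∇B||_F (normalization conventions vary, so treat the constant factors as an assumption).

```python
import numpy as np

def B_field(x, y, z, B0=1.0, R0=3.0):
    # Simple toroidal field B = B0 * R0 / R in the toroidal direction.
    R = np.sqrt(x**2 + y**2)
    return B0 * R0 / R * np.array([-y / R, x / R, 0.0])

def grad_B_tensor(x, y, z, h=1e-5):
    # Finite-difference estimate of the 3x3 tensor G[i, j] = dB_j / dx_i.
    G = np.zeros((3, 3))
    for i, e in enumerate(np.eye(3)):
        G[i] = (B_field(*(np.array([x, y, z]) + h * e))
                - B_field(*(np.array([x, y, z]) - h * e))) / (2 * h)
    return G

def scale_length(x, y, z):
    # One possible definition of the magnetic gradient scale length:
    # L = |B| / ||grad B||_F.
    B = B_field(x, y, z)
    G = grad_B_tensor(x, y, z)
    return np.linalg.norm(B) / np.linalg.norm(G)

print("L_gradB at R = 3 m:", round(scale_length(3.0, 0.0, 0.0), 3), "m")
```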

Generalization in diffusion models arises from geometry-adaptive harmonic representations

Zahra Kadkhodaie, Florentin Guth, E. P. Simoncelli, S. Mallat

Deep neural networks (DNNs) trained for image denoising are able to generate high-quality samples with score-based reverse diffusion algorithms. These impressive capabilities seem to imply an escape from the curse of dimensionality, but recent reports of memorization of the training set raise the question of whether these networks are learning the "true" continuous density of the data. Here, we show that two DNNs trained on non-overlapping subsets of a dataset learn nearly the same score function, and thus the same density, when the number of training images is large enough. In this regime of strong generalization, diffusion-generated images are distinct from the training set, and are of high visual quality, suggesting that the inductive biases of the DNNs are well-aligned with the data density. We analyze the learned denoising functions and show that the inductive biases give rise to a shrinkage operation in a basis adapted to the underlying image. Examination of these bases reveals oscillating harmonic structures along contours and in homogeneous regions. We demonstrate that trained denoisers are inductively biased towards these geometry-adaptive harmonic bases since they arise not only when the network is trained on photographic images, but also when it is trained on image classes supported on low-dimensional manifolds for which the harmonic basis is suboptimal. Finally, we show that when trained on regular image classes for which the optimal basis is known to be geometry-adaptive and harmonic, the denoising performance of the networks is near-optimal.
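
For intuition about "a shrinkage operation in a basis", the sketch below builds a classical fixed-basis (Fourier/Wiener) denoiser and verifies that the eigenvalues of its Jacobian are shrinkage factors in [0, 1]. This is only a point of reference: the paper's analysis concerns trained DNN denoisers whose basis is geometry-adaptive rather than fixed, and the toy 1/f^2 prior here is purely illustrative.

```python
import numpy as np

def wiener_denoiser(y, signal_power, sigma2):
    """A linear 'shrinkage in a basis' denoiser: each Fourier coefficient of
    the noisy image is multiplied by a factor in [0, 1]."""
    shrink = signal_power / (signal_power + sigma2)
    return np.real(np.fft.ifft2(shrink * np.fft.fft2(y)))

def jacobian(f, y, h=1e-4):
    # Numerical Jacobian of the denoiser, flattened to an N^2 x N^2 matrix.
    n = y.size
    J = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        J[:, i] = (f(y + e.reshape(y.shape))
                   - f(y - e.reshape(y.shape))).ravel() / (2 * h)
    return J

rng = np.random.default_rng(0)
N, sigma2 = 16, 0.5
kx, ky = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N))
signal_power = 1.0 / (1e-3 + kx**2 + ky**2)       # toy 1/f^2 image prior
y = rng.standard_normal((N, N))

J = jacobian(lambda im: wiener_denoiser(im, signal_power, sigma2), y)
eigvals = np.linalg.eigvalsh((J + J.T) / 2)
print("shrinkage factors lie in [0, 1]:", round(eigvals.min(), 3), round(eigvals.max(), 3))
```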

Scaling Laws for Associative Memories

Vivien Cabannes, Elvis Dohmatob, A. Bietti

Learning arguably involves the discovery and memorization of abstract rules. The aim of this paper is to study associative memory mechanisms. Our model is based on high-dimensional matrices consisting of outer products of embeddings, which relates to the inner layers of transformer language models. We derive precise scaling laws with respect to sample size and parameter size, and discuss the statistical efficiency of different estimators, including optimization-based algorithms. We provide extensive numerical experiments to validate and interpret theoretical results, including fine-grained visualizations of the stored memory associations.
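
A minimal numerical instance of such an outer-product memory is sketched below, using random, untrained embeddings; the paper also studies optimized embeddings and derives the corresponding scaling laws, which this sketch does not reproduce. It only shows how recall degrades as the number of stored associations grows relative to the embedding dimension, with all sizes chosen arbitrarily.

```python
import numpy as np

def recall_accuracy(n_pairs, d=128, seed=0):
    """Outer-product associative memory: W = sum_i out_emb[y_i] (x) in_emb[x_i],
    with recall of y_i as the highest-scoring output embedding for input x_i."""
    rng = np.random.default_rng(seed)
    in_emb = rng.standard_normal((n_pairs, d)) / np.sqrt(d)
    out_emb = rng.standard_normal((n_pairs, d)) / np.sqrt(d)
    targets = rng.integers(0, n_pairs, size=n_pairs)  # stored association i -> targets[i]
    W = out_emb[targets].T @ in_emb                   # sum of outer products, shape (d, d)
    scores = in_emb @ W.T @ out_emb.T                 # scores[i, j] = out_emb[j]^T W in_emb[i]
    return float((scores.argmax(axis=1) == targets).mean())

for n in [50, 200, 800, 3200]:
    print(f"d=128, {n:5d} stored associations, recall accuracy = {recall_accuracy(n):.2f}")
```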

Should Under-parameterized Student Networks Copy or Average Teacher Weights?

B. Şimşek, Amire Bendjeddou, Wulfram Gerstner, Johanni Brea

Any continuous function f∗ can be approximated arbitrarily well by a neural network with sufficiently many neurons k. We consider the case when f∗ itself is a neural network with one hidden layer and k neurons. Approximating f∗ with a neural network with n < k neurons can thus be seen as fitting an under-parameterized “student” network with n neurons to a “teacher” network with k neurons. As the student has fewer neurons than the teacher, it is unclear whether each of the n student neurons should copy one of the teacher neurons or rather average a group of teacher neurons. For shallow neural networks with the erf activation function and for the standard Gaussian input distribution, we prove that “copy-average” configurations are critical points if the teacher’s incoming vectors are orthonormal and its outgoing weights are unitary. Moreover, the optimum among such configurations is reached when n − 1 student neurons each copy one teacher neuron and the n-th student neuron averages the remaining k − n + 1 teacher neurons. For the student network with n = 1 neuron, we additionally provide a closed-form solution of the non-trivial critical point(s) for commonly used activation functions through solving an equivalent constrained optimization problem. Empirically, we find for the erf activation function that gradient flow converges either to the optimal copy-average critical point or to another point where each student neuron approximately copies a different teacher neuron. Finally, we find similar results for the ReLU activation function, suggesting that the optimal solution of under-parameterized networks has a universal structure.
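
The experimental setup can be reproduced in miniature. The sketch below is not the paper's experiment: the hyperparameters, sample sizes, and initialization are arbitrary choices. It trains an under-parameterized erf student with n = 4 neurons on a teacher with k = 8 orthonormal incoming vectors and unit outgoing weights, then prints the cosine overlaps between student and teacher incoming vectors to see whether each student neuron copies one teacher neuron or averages several.

```python
import torch

torch.manual_seed(0)
d, k, n, n_train = 16, 8, 4, 8000

def net(x, W, a):
    # One-hidden-layer network with erf activation: sum_j a_j * erf(w_j . x).
    return torch.erf(x @ W.T) @ a

# Teacher: orthonormal incoming vectors, unit outgoing weights.
W_teacher, a_teacher = torch.eye(d)[:k], torch.ones(k)

# Under-parameterized student (n < k), trained on Gaussian inputs.
W_student = (0.1 * torch.randn(n, d)).requires_grad_(True)
a_student = torch.ones(n, requires_grad=True)
opt = torch.optim.Adam([W_student, a_student], lr=0.02)
x_train = torch.randn(n_train, d)
y_train = net(x_train, W_teacher, a_teacher)
for _ in range(3000):
    opt.zero_grad()
    loss = ((net(x_train, W_student, a_student) - y_train) ** 2).mean()
    loss.backward()
    opt.step()

# Copy or average? Cosine overlaps between student and teacher incoming vectors.
with torch.no_grad():
    overlaps = (W_student / W_student.norm(dim=1, keepdim=True)) @ W_teacher.T
    print("final training loss:", round(float(loss), 4))
    print(overlaps.round(decimals=2))   # rows: student neurons, columns: teacher neurons
```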
