298 Publications

Reinforcement learning with function approximation: From linear to nonlinear

Jihao Long, J. Han

Function approximation has been an indispensable component in modern reinforcement learning algorithms designed to tackle problems with large state spaces in high dimensions. This paper reviews recent results on error analysis for these reinforcement learning algorithms in linear or nonlinear approximation settings, emphasizing approximation error and estimation error/sample complexity. We discuss various properties related to approximation error and present concrete conditions on the transition probability and reward function under which these properties hold. Sample complexity analysis in reinforcement learning is more complicated than in supervised learning, primarily due to the distribution mismatch phenomenon. Under assumptions on the linear structure of the problem, numerous algorithms in the literature achieve polynomial sample complexity with respect to the number of features, episode length, and accuracy, although the minimax rate has not been achieved yet. These results rely on $L^\infty$ and UCB estimates of the estimation error, which can handle the distribution mismatch phenomenon. The problem and analysis become substantially more challenging in the setting of nonlinear function approximation, as both $L^\infty$ and UCB estimation are inadequate for bounding the error with a favorable rate in high dimensions. We discuss the additional assumptions necessary to address the distribution mismatch and derive meaningful results for nonlinear RL problems.
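
For readers less familiar with the distribution mismatch phenomenon, one standard way to quantify it (a textbook-style illustration with notation chosen here, not taken from the paper) is a concentrability coefficient comparing the visitation measure $\mu_\pi$ of the evaluated policy with the data distribution $\nu$:

$$C(\mu_\pi, \nu) = \sup_{(s,a)} \frac{\mathrm{d}\mu_\pi}{\mathrm{d}\nu}(s,a), \qquad \|Q - \widehat{Q}\|_{L^2(\mu_\pi)} \le \sqrt{C(\mu_\pi, \nu)}\, \|Q - \widehat{Q}\|_{L^2(\nu)}.$$

When $C(\mu_\pi, \nu)$ is large or unbounded, a small estimation error on the data distribution implies little about the error under the policy being evaluated, which is why $L^\infty$ and UCB-type estimates are invoked instead.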


An equivariant neural operator for developing nonlocal tensorial constitutive models

J. Han, Xu-Hui Zhou, Heng Xiao

Developing robust constitutive models is a fundamental and longstanding problem for accelerating the simulation of complicated physics. Machine learning provides promising tools to construct constitutive models based on various calibration data. In this work, we propose a neural operator to develop nonlocal constitutive models for tensorial quantities through a vector-cloud neural network with equivariance (VCNN-e). The VCNN-e respects all the invariance properties desired by constitutive models, faithfully reflects the region of influence in physics, and is applicable to different spatial resolutions. By design, the model guarantees that the predicted tensor is invariant to the frame translation and ordering (permutation) of the neighboring points. Furthermore, it is equivariant to the frame rotation, i.e., the output tensor co-rotates with the coordinate frame. We evaluate the VCNN-e by using it to emulate the Reynolds stress transport model for turbulent flows, which directly computes the Reynolds stress tensor to close the Reynolds-averaged Navier-Stokes (RANS) equations. The evaluation is performed in two situations: (1) emulating the Reynolds stress model through synthetic data generated from the Reynolds stress transport equations with closure models, and (2) predicting the Reynolds stress by learning from data generated from direct numerical simulations. Such a priori evaluations of the proposed network pave the way for developing and calibrating robust and nonlocal, non-equilibrium closure models for the RANS equations.
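
The invariance and equivariance requirements can be checked numerically on a toy example. The sketch below uses a simple cloud-to-tensor map (an illustrative stand-in, not the VCNN-e architecture; toy_tensor_map and all names are hypothetical) and verifies that the output tensor co-rotates with the frame:

```python
import numpy as np

def toy_tensor_map(points):
    """Toy cloud-to-tensor map (NOT VCNN-e): centering gives translation
    invariance, symmetric weighting gives permutation invariance, and
    outer products make the output co-rotate with the coordinate frame."""
    rel = points - points.mean(axis=0)           # translation invariance
    w = np.exp(-np.linalg.norm(rel, axis=1))     # permutation-symmetric weights
    return sum(wi * np.outer(ri, ri) for wi, ri in zip(w, rel))

rng = np.random.default_rng(0)
cloud = rng.normal(size=(50, 3))

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # random rotation matrix
R = Q if np.linalg.det(Q) > 0 else -Q

T = toy_tensor_map(cloud)
T_rot = toy_tensor_map(cloud @ R.T)              # rotate every point by R
assert np.allclose(T_rot, R @ T @ R.T)           # output co-rotates: T -> R T R^T
```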


A new provably stable weighted state redistribution algorithm

We propose a practical finite volume method on cut cells using state redistribution. Our algorithm is provably monotone, total variation diminishing, and GKS stable in many situations, and shuts off smoothly as the cut cell size approaches a target value. Our analysis reveals why original state redistribution works so well: it results in a monotone scheme for most configurations, though at times subject to a slightly smaller CFL condition. Our analysis also explains why a pre-merging step is beneficial. We show computational experiments in two and three dimensions.
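
For context, the difficulty that state redistribution addresses is the small-cell problem of cut-cell methods (standard background, with notation chosen here rather than taken from the paper): an explicit finite volume update on a cut cell with volume fraction $\alpha_i \ll 1$,

$$u_i^{n+1} = u_i^n - \frac{\Delta t}{\alpha_i h}\left(F_{i+1/2} - F_{i-1/2}\right),$$

is stable only for $\Delta t \lesssim \alpha_i h/|a|$, a constraint that degenerates as $\alpha_i \to 0$; a postprocessing step such as state redistribution restores the time step of the uncut cells.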


Automatic, high-order, and adaptive algorithms for Brillouin zone integration

J. Kaye, Sophie Beck, A. Barnett, Lorenzo Van Muñoz, Olivier Parcollet

We present efficient methods for Brillouin zone integration with a non-zero but possibly very small broadening factor η, focusing on cases in which downfolded Hamiltonians can be evaluated efficiently using Wannier interpolation. We describe robust, high-order accurate algorithms automating convergence to a user-specified error tolerance ϵ, emphasizing an efficient computational scaling with respect to η. After analyzing the standard equispaced integration method, applicable in the case of large broadening, we describe a simple iterated adaptive integration algorithm effective in the small-η regime. Its computational cost scales as O(log³(η⁻¹)) as η → 0⁺ in three dimensions, as opposed to O(η⁻³) for equispaced integration. We argue that, by contrast, tree-based adaptive integration methods scale only as O(log(η⁻¹)/η²) for typical Brillouin zone integrals. In addition to its favorable scaling, the iterated adaptive algorithm is straightforward to implement, particularly for integration on the irreducible Brillouin zone, for which it avoids the tetrahedral meshes required for tree-based schemes. We illustrate the algorithms by calculating the spectral function of SrVO₃ with broadening on the meV scale.
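
The iterated structure of the adaptive algorithm can be sketched with nested one-dimensional adaptive quadrature. The toy example below uses a simple cubic tight-binding band (not the Wannier-interpolated Hamiltonians or the production implementation from the paper) to compute a broadened spectral density; the inner integrals need progressively more subdivisions as η shrinks:

```python
import numpy as np
from scipy.integrate import quad

t_hop, eta, omega = 1.0, 1e-2, 0.5   # hopping, broadening, frequency

def band(kx, ky, kz):
    return -2.0 * t_hop * (np.cos(kx) + np.cos(ky) + np.cos(kz))

def integrand(kx, ky, kz):
    # -(1/pi) Im 1/(omega + i*eta - eps(k)), written out explicitly
    d = omega - band(kx, ky, kz)
    return (eta / np.pi) / (d * d + eta * eta)

def inner(ky, kz):    # adaptive in kx for fixed (ky, kz)
    return quad(integrand, -np.pi, np.pi, args=(ky, kz), limit=200)[0]

def middle(kz):       # adaptive in ky
    return quad(inner, -np.pi, np.pi, args=(kz,), limit=200)[0]

# Outermost adaptive integral in kz, normalized by the BZ volume (2*pi)^3.
A = quad(middle, -np.pi, np.pi, limit=200)[0] / (2.0 * np.pi) ** 3
print(f"A(omega={omega}) ~ {A:.6e}")
```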


DeePMD-kit v2: A software package for deep potential models

Jinzhe Zeng, Duo Zhang, J. Han

DeePMD-kit is a powerful open-source software package that facilitates molecular dynamics simulations using machine learning potentials known as Deep Potential (DP) models. This package, which was released in 2017, has been widely used in the fields of physics, chemistry, biology, and material science for studying atomistic systems. The current version of DeePMD-kit offers numerous advanced features, such as DeepPot-SE, attention-based and hybrid descriptors, the ability to fit tensile properties, type embedding, model deviation, DP-range correction, DP long range, graphics processing unit support for customized operators, model compression, non-von Neumann molecular dynamics, and improved usability, including documentation, compiled binary packages, graphical user interfaces, and application programming interfaces. This article presents an overview of the current major version of the DeePMD-kit package, highlighting its features and technical details. Additionally, this article presents a comprehensive procedure for conducting molecular dynamics as a representative application, benchmarks the accuracy and efficiency of different models, and discusses ongoing developments.
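
A minimal inference sketch with the package's Python interface might look as follows; the model file name is hypothetical, and the exact argument and return conventions should be checked against the documentation of the installed version:

```python
import numpy as np
from deepmd.infer import DeepPot  # DeePMD-kit Python inference API

# "graph.pb" stands for a previously trained DP model (hypothetical path).
dp = DeepPot("graph.pb")

coords = np.array([[0.0, 0.0, 0.0, 0.0, 0.0, 1.0]])  # 1 frame, 2 atoms, flat xyz (Angstrom)
cells = 10.0 * np.eye(3).reshape(1, 9)               # cubic 10 Angstrom box
atom_types = [0, 0]                                  # type indices from training

energy, forces, virial = dp.eval(coords, cells, atom_types)
print(energy.shape, forces.shape, virial.shape)
```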


Normative framework for deriving neural networks with multi-compartmental neurons and non-Hebbian plasticity

D. Lipshutz, Y. Bahroun, S. Golkar, A. Sengupta, D. Chklovskii

An established normative approach for understanding the algorithmic basis of neural computation is to derive online algorithms from principled computational objectives and evaluate their compatibility with anatomical and physiological observations. Similarity matching objectives have served as successful starting points for deriving online algorithms that map onto neural networks (NNs) with point neurons and Hebbian/anti-Hebbian plasticity. These NN models account for many anatomical and physiological observations; however, the objectives have limited computational power and the derived NNs do not explain multi-compartmental neuronal structures and non-Hebbian forms of plasticity that are prevalent throughout the brain. In this article, we unify and generalize recent extensions of the similarity matching approach to address more complex objectives, including a large class of unsupervised and self-supervised learning tasks that can be formulated as symmetric generalized eigenvalue problems or nonnegative matrix factorization problems. Interestingly, the online algorithms derived from these objectives naturally map onto NNs with multi-compartmental neurons and local, non-Hebbian learning rules. Therefore, this unified extension of the similarity matching approach provides a normative framework that facilitates understanding multi-compartmental neuronal structures and non-Hebbian plasticity found throughout the brain.
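
As a point of reference for what is being generalized, the classical similarity matching network with point neurons can be written as an online algorithm with Hebbian feedforward and anti-Hebbian lateral updates (a schematic of the standard baseline, not the multi-compartmental algorithms derived in the article):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, lr = 10, 3, 0.01
W = 0.1 * rng.normal(size=(k, d))        # feedforward weights (Hebbian)
M = np.eye(k)                            # lateral weights (anti-Hebbian)

for _ in range(5000):
    x = rng.normal(size=d)
    y = np.linalg.solve(M, W @ x)        # fixed point of the recurrent dynamics
    W += lr * (np.outer(y, x) - W)       # Hebbian: correlate pre- and post-synaptic
    M += lr * (np.outer(y, y) - M)       # anti-Hebbian: decorrelate the outputs

# After training, the map x -> M^{-1} W x approximately projects onto the
# top-k principal subspace of the input distribution.
```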


A fast time domain solver for the equilibrium Dyson equation

J. Kaye, Hugo U. R. Strand

We consider the numerical solution of the real time equilibrium Dyson equation, which is used in calculations of the dynamical properties of quantum many-body systems. We show that this equation can be written as a system of coupled, nonlinear, convolutional Volterra integro-differential equations, for which the kernel depends self-consistently on the solution. As is typical in the numerical solution of Volterra-type equations, the computational bottleneck is the quadratic-scaling cost of history integration. However, the structure of the nonlinear Volterra integral operator precludes the use of standard fast algorithms. We propose a quasilinear-scaling FFT-based algorithm which respects the structure of the nonlinear integral operator. The resulting method can reach large propagation times, and is thus well-suited to explore quantum many-body phenomena at low energy scales. We demonstrate the solver with two standard model systems: the Bethe graph, and the Sachdev-Ye-Kitaev model.
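
The history bottleneck is easy to see in a schematic time stepper for a scalar Volterra integro-differential equation (a toy kernel chosen here for illustration, not the paper's self-consistent Dyson kernel or its fast algorithm):

```python
import numpy as np

T, N = 10.0, 1000
dt = T / N
t = np.linspace(0.0, T, N + 1)
kernel = lambda s: np.exp(-s)        # stand-in memory kernel k(t - t')
g = np.empty(N + 1)
g[0] = 1.0

for n in range(N):
    # History integral over all earlier times: O(N) work per step,
    # hence O(N^2) overall -- the cost the fast FFT-based method reduces.
    hist = dt * np.sum(kernel(t[n] - t[:n + 1]) * g[:n + 1])
    g[n + 1] = g[n] - dt * hist      # dg/dt = -int_0^t k(t-t') g(t') dt'
```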


Conformational heterogeneity and probability distributions from single-particle cryo-electron microscopy

W. S. Tang, Ellen D. Zhong, S. Hanson, E. Thiede, P. Cossio

Single-particle cryo-electron microscopy (cryo-EM) is a technique that takes projection images of biomolecules frozen at cryogenic temperatures. A major advantage of this technique is its ability to image single biomolecules in heterogeneous conformations. While this poses a challenge for data analysis, recent algorithmic advances have enabled the recovery of heterogeneous conformations from the noisy imaging data. Here, we review methods for the reconstruction and heterogeneity analysis of cryo-EM images, ranging from linear-transformation-based methods to nonlinear deep generative models. We overview the dimensionality-reduction techniques used in heterogeneous 3D reconstruction methods and specify what information each method can infer from the data. Then, we review methods that use cryo-EM images to estimate probability distributions over conformations, either in reduced subspaces or over states predefined by atomistic simulations. We conclude with the ongoing challenges for the cryo-EM community.
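
As a minimal illustration of the linear-transformation end of this spectrum, principal component analysis of flattened particle images gives low-dimensional conformational coordinates (a toy sketch on synthetic data; real pipelines work with CTF-corrected projections and 3D reconstructions, not raw pixels):

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.normal(size=(500, 64 * 64))   # 500 flattened 64x64 "particles" (synthetic)
mean = images.mean(axis=0)
U, S, Vt = np.linalg.svd(images - mean, full_matrices=False)
coords = (images - mean) @ Vt[:10].T       # 10-D coordinates for each particle
# Estimating a density over `coords` then approximates a distribution
# over conformational states in the reduced subspace.
```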


On Single Index Models beyond Gaussian Data

L. Pillaud-Vivien, Joan Bruna, Aaron Zweig

Sparse high-dimensional functions have arisen as a rich framework to study the behavior of gradient-descent methods using shallow neural networks, showcasing their ability to perform feature learning beyond linear models. Amongst those functions, the simplest are single-index models $f(x) = \phi( x \cdot \theta^*)$, where the labels are generated by an arbitrary non-linear scalar link function $\phi$ applied to an unknown one-dimensional projection $\theta^*$ of the input data. By focusing on Gaussian data, several recent works have built a remarkable picture, where the so-called information exponent (related to the regularity of the link function) controls the required sample complexity. In essence, these tools exploit the stability and spherical symmetry of Gaussian distributions. In this work, building from the framework of Ben Arous et al. (2020), we explore extensions of this picture beyond the Gaussian setting, where either stability or symmetry may be violated. Focusing on the planted setting where $\phi$ is known, our main results establish that Stochastic Gradient Descent can efficiently recover the unknown direction $\theta^*$ in the high-dimensional regime, under assumptions that extend previous works (Yehudai and Shamir, 2020; Wu, 2022).
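
A sketch of the planted setting, with non-Gaussian inputs to match the theme of the paper (the link function, step size, and input law below are illustrative choices, not the paper's assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, lr, steps = 200, 0.05, 20000
phi = lambda z: z ** 2 + z                 # known scalar link function
dphi = lambda z: 2 * z + 1
theta_star = np.ones(d) / np.sqrt(d)       # planted direction
theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)

for _ in range(steps):
    x = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=d)   # non-Gaussian data
    err = phi(x @ theta) - phi(x @ theta_star)             # squared-loss residual
    theta -= (lr / d) * err * dphi(x @ theta) * x          # online SGD step
    theta /= np.linalg.norm(theta)                         # project to the sphere

print("overlap |<theta, theta*>| =", abs(theta @ theta_star))
```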


SGD with Large Step Sizes Learns Sparse Features

Maksym Andriushchenko, Aditya Varre, L. Pillaud-Vivien, Nicolas Flammarion

We showcase important features of the dynamics of Stochastic Gradient Descent (SGD) in the training of neural networks. We present empirical observations that commonly used large step sizes (i) may lead the iterates to jump from one side of a valley to the other, causing loss stabilization, and (ii) this stabilization induces a hidden stochastic dynamics that implicitly biases SGD toward simple predictors. Furthermore, we show empirically that the longer large step sizes keep SGD high in the loss landscape valleys, the better the implicit regularization can operate and find sparse representations. Notably, no explicit regularization is used: the regularization effect comes solely from the SGD dynamics influenced by the large step size schedule. Therefore, these observations unveil how, through the step size schedule, gradient and noise together drive the SGD dynamics through the loss landscape of neural networks. We justify these findings theoretically through the study of simple neural network models as well as qualitative arguments inspired by stochastic processes. This analysis allows us to shed new light on some common practices and observed phenomena when training deep networks. The code of our experiments is available at https://github.com/tml-epfl/sgd-sparse-features.
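
Observation (i) can be caricatured in one dimension: on a sharp quadratic valley, gradient descent with a large step size bounces between the valley walls while the loss stabilizes instead of decaying (a cartoon of the mechanism, not the paper's neural-network experiments):

```python
# Loss 0.5 * sharpness * x**2; GD multiplies x by (1 - lr * sharpness),
# so for lr > 1/sharpness the iterate flips sign each step ("bouncing")
# and for lr > 2/sharpness it diverges.
sharpness, lr, x = 10.0, 0.19, 1.0
for step in range(8):
    x -= lr * sharpness * x
    print(step, f"x = {x:+.3f}  loss = {0.5 * sharpness * x * x:.3f}")
```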
