Mathematics of Deep Learning Seminar: Rachel Ward

Title: Generalization bounds for sparse random feature expansions

Abstract: Random feature methods have been successful in various machine learning tasks, are easy to compute, and come with theoretical accuracy bounds. They serve as an alternative to standard neural networks, since they can represent similar function spaces without a costly training phase. However, random feature approximations suffer from the curse of dimensionality in the number of function evaluations needed to achieve a given accuracy, limiting their use in data-scarce applications and problems in scientific machine learning. To overcome this, we consider random features with sparse random feature weights for approximating functions of low order, that is, functions which can be written as a combination of terms, each depending on only a small number of variables. We provide generalization error and uniform bounds for low-order function approximation via sparse random feature expansions obtained through ℓ1 regression. We show that the sparse random feature expansion outperforms shallow networks in several scientific machine learning tasks.
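The following is a minimal Python sketch of the idea described in the abstract, not the speaker's implementation: random cosine features whose weight vectors are each supported on only a few coordinates, fit to a low-order target function by ℓ1-regularized (Lasso) regression. The target function f, the feature count N, the per-feature support size q, and the Lasso penalty alpha are all illustrative choices.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Illustrative "low-order" target: a sum of terms, each depending on
# only a small number of the 10 input variables.
def f(X):
    return np.sin(2 * X[:, 0]) + X[:, 3] * X[:, 7] + np.cos(X[:, 5])

d, n, N, q = 10, 200, 2000, 2   # dimension, samples, features, weight sparsity

X = rng.uniform(-1, 1, size=(n, d))
y = f(X)

# Sparse random feature weights: each omega_j is supported on q coordinates.
Omega = np.zeros((N, d))
for j in range(N):
    support = rng.choice(d, size=q, replace=False)
    Omega[j, support] = rng.normal(size=q)
b = rng.uniform(0, 2 * np.pi, size=N)

# Random feature matrix: A[i, j] = cos(<omega_j, x_i> + b_j).
A = np.cos(X @ Omega.T + b)

# l1 regression selects a sparse expansion in the random features.
model = Lasso(alpha=1e-3, max_iter=10000).fit(A, y)
print("nonzero coefficients:", np.count_nonzero(model.coef_))

# Generalization check on fresh samples.
X_test = rng.uniform(-1, 1, size=(500, d))
y_hat = np.cos(X_test @ Omega.T + b) @ model.coef_ + model.intercept_
print("test RMSE:", np.sqrt(np.mean((y_hat - f(X_test)) ** 2)))

Because each random weight vector touches only q coordinates, the dictionary is adapted to low-order structure, and the ℓ1 penalty keeps only the few features needed; this is the mechanism the generalization bounds in the talk concern.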
