Mathematics of Deep Learning Seminar: Jiequn Han

Title: Perturbational Complexity by Distribution Mismatch: A Systematic Analysis of Reinforcement Learning in Reproducing Kernel Hilbert Space

Abstract: Most existing theoretical analyses of reinforcement learning (RL) are limited to the tabular setting or to linear models, owing to the difficulty of handling function approximation in high-dimensional spaces with an uncertain environment. This talk offers fresh insight into this challenge by analyzing reinforcement learning in a general reproducing kernel Hilbert space (RKHS). We consider a family of Markov decision processes whose reward functions lie in the unit ball of an RKHS and whose transition probabilities lie in an arbitrary set. We define a quantity called “perturbational complexity by distribution mismatch” to characterize the complexity of the admissible state-action distribution space in response to a perturbation in the RKHS. We show that the perturbational complexity gives both a lower bound on the error of all possible algorithms and an upper bound on the error of two specific algorithms for the RL problem. Hence, the decay of the perturbational complexity with respect to the perturbation scale provides an informative difficulty measure for the RL problem. We will present some concrete examples and discuss whether the complexity decays quickly in these examples.
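
To make the abstract's central object concrete, here is a minimal LaTeX sketch of one way such a quantity can be written, assuming (hypothetically) that \Pi denotes the admissible state-action distribution space and \mathcal{H}_K the RKHS of the talk; the symbol \Delta_{\Pi}(\epsilon) and this exact form are illustrative assumptions, not the paper's definition:

% Hypothetical sketch: how much two admissible state-action distributions
% can disagree on a reward perturbation of RKHS norm at most \epsilon.
\[
  \Delta_{\Pi}(\epsilon)
  \;=\;
  \sup_{g \in \mathcal{H}_K,\; \|g\|_{\mathcal{H}_K} \le \epsilon}
  \;\;
  \sup_{\mu,\,\mu' \in \Pi}
  \left( \int g \,\mathrm{d}\mu \;-\; \int g \,\mathrm{d}\mu' \right).
\]

Under this reading, \Delta_{\Pi}(\epsilon) measures how differently admissible state-action distributions can weigh an RKHS perturbation of scale \epsilon; rapid decay as \epsilon \to 0 would then correspond to an easier RL problem, matching the difficulty measure described in the abstract.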
