Machine Learning at the Flatiron Institute Seminar: Phillip Isola

Location

Flatiron Institute
162 5th Avenue
New York, NY 10010 United States

Title: Neural Thickets: Diverse Task Experts Are Dense Around Pretrained Weights

Abstract: Pretraining produces a learned parameter vector that is typically treated as a starting point for further iterative adaptation. In this work, we instead view the outcome of pretraining as a distribution over parameter vectors whose support already contains task-specific experts. We show that in small models, such expert solutions occupy a negligible fraction of the volume of this distribution, making their discovery reliant on structured optimization methods such as gradient descent. In contrast, in large, well-pretrained models, the density of task experts increases dramatically, so that diverse, task-improving specialists populate a substantial fraction of the neighborhood around the pretrained weights. Motivated by this perspective, we explore a simple, fully parallel post-training method that samples N parameter perturbations at random, selects the top K, and ensembles predictions via majority vote. Despite its simplicity, this approach is competitive with standard post-training methods such as PPO, GRPO, and ES for contemporary large-scale models.
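The sample-N, select-top-K, majority-vote procedure described in the abstract can be sketched as follows. This is a minimal illustration on a toy linear classifier, not the authors' implementation: the synthetic task, the perturbation scale `sigma`, and the scoring function are all hypothetical stand-ins for a pretrained model and its downstream task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained model: a noisy linear classifier on a
# synthetic binary task. theta_0 plays the role of the pretrained weights.
d = 20
X = rng.normal(size=(200, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(int)
theta_0 = true_w + rng.normal(scale=0.5, size=d)  # imperfect pretrained weights

def predict(theta, X):
    return (X @ theta > 0).astype(int)

def score(theta):
    # Task score of a candidate parameter vector (here, accuracy).
    return (predict(theta, X) == y).mean()

# Fully parallel post-training: sample N random perturbations around the
# pretrained weights, keep the top K by task score, ensemble by majority vote.
N, K, sigma = 128, 8, 0.3
candidates = theta_0 + sigma * rng.normal(size=(N, d))
scores = np.array([score(c) for c in candidates])
top_k = candidates[np.argsort(scores)[-K:]]

votes = np.stack([predict(t, X) for t in top_k])   # (K, n) per-expert predictions
ensemble = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote across experts

print(f"pretrained accuracy: {score(theta_0):.3f}")
print(f"ensemble accuracy:   {(ensemble == y).mean():.3f}")
```

Each candidate is evaluated independently, so the N perturbations can be scored in parallel with no gradient computation, which is what distinguishes this from iterative post-training methods like PPO or GRPO.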
