Sources of Variance in Pretraining and Finetuning

Abstract

You have engaged in the very modern practice of transfer learning. You pretrained a model on a self-supervised objective, then you finetuned it on a downstream task, and you found excellent performance on the test set. ‘Aha’, you say. ‘I found a good pretraining procedure.’ Did you? You try finetuning again. The results are terrible! ‘Aha’, you say. ‘I found a bad finetuning procedure.’ Did you?

The random seeds for both the pretraining and finetuning stages have a substantial influence on outcomes. However, pretraining new models is computationally expensive, so measuring the robustness of a procedure across different seeds can be prohibitive. This talk will address, first, the influence that the pretraining seed has on both in-domain and out-of-domain (OOD) performance. Then we will address the role of the finetuning seed. Much of the variation in OOD generalization can be ascribed to where the finetuning seeds direct SGD trajectories. In particular, we discuss how to predict the generalization behavior of a finetuned model based on topographic properties of its region of the loss surface. By understanding the degree of influence that random seeds have on performance, we can fairly evaluate the robustness of a training procedure, rather than a single set of parameters. By understanding the mechanism of that influence, we can go further and develop improved training methods.
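
As a rough illustration of the loss-surface idea (a sketch, not the talk's actual code), the snippet below probes whether two finetuning runs from the same pretrained model land in a linearly connected, low-loss region. The helper `eval_loss` is a hypothetical stand-in for whatever in-domain or OOD evaluation the reader already has; everything else is plain PyTorch.

```python
# A minimal sketch, assuming two PyTorch models with matching architectures.
# `eval_loss` is a hypothetical user-supplied callable that returns a scalar
# loss for a model on some evaluation set (in-domain or OOD).
import copy
import torch

def interpolated_losses(model_a, model_b, eval_loss, steps=11):
    """Evaluate loss along the straight line between two finetuned models.

    A large bump ("barrier") partway along the path suggests the two
    finetuning seeds steered SGD into disconnected basins, which is the
    kind of topographic signal the abstract alludes to.
    """
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    probe = copy.deepcopy(model_a)  # scratch model to hold mixed weights
    losses = []
    with torch.no_grad():
        for i in range(steps):
            alpha = i / (steps - 1)
            mixed = {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}
            probe.load_state_dict(mixed)
            losses.append(eval_loss(probe))
    return losses
```

Seed variance itself can be estimated in the same spirit: finetune several models that differ only in the finetuning seed, run the same `eval_loss` on each, and report the spread rather than a single number.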

Date: Jun 20, 2022, 1:00 PM
Event: USC Information Sciences Institute
Location: Los Angeles, CA
Naomi Saphra
Gradient Descent Spectator

Naomi Saphra is a researcher in NLP and machine learning.