YINS Distinguished Lecturer Seminar - Peter Bartlett, Univ. of California at Berkeley

Event time: 
Wednesday, December 2, 2020 - 12:00pm
Location: 
Zoom Presentation
Event description: 

YINS Distinguished Lecturer Seminar

Speaker: Peter Bartlett
Professor of Computer Science and Statistics, University of California at Berkeley
Associate Director of the Simons Institute for the Theory of Computing
Director of the Foundations of Data Science Institute
Director of the Collaboration on the Theoretical Foundations of Deep Learning

Title: “Benign Overfitting”

Zoom Link: https://yale.zoom.us/j/97135219127

Abstract: 

Classical theory that guides the design of nonparametric prediction methods like deep neural networks involves a tradeoff between the fit to the training data and the complexity of the prediction rule. Deep learning seems to operate outside the regime where these results are informative, since deep networks can perform well even with a perfect fit to noisy training data. We investigate this phenomenon of ‘benign overfitting’ in the simplest setting, that of linear prediction. We give a characterization of linear regression problems for which the minimum norm interpolating prediction rule has near-optimal prediction accuracy. The characterization is in terms of two notions of effective rank of the data covariance. It shows that overparameterization is essential: the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size.  It also shows an important role for finite-dimensional data: benign overfitting occurs for a much narrower range of properties of the data distribution when the data lies in an infinite dimensional space versus when it lies in a finite dimensional space whose dimension grows faster than the sample size. We discuss implications for deep networks, for robustness to adversarial examples, and for the rich variety of possible behaviors of excess risk as a function of dimension, and we describe extensions to ridge regression and barriers to analyzing benign overfitting based on model-dependent generalization bounds.  Joint work with Phil Long, Gábor Lugosi, and Alex Tsigler. 
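The following is a minimal, self-contained Python sketch of the phenomenon the abstract describes; it is not code from the talk or the paper, and the dimensions, spiked covariance, and noise level are illustrative assumptions. The minimum-norm interpolator fits noisy training data exactly yet still predicts well when the low-variance directions that are unimportant for prediction vastly outnumber the sample size. The effective-rank formulas in the final comment follow one common reading of the companion paper and should be treated as an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumption): a spiked covariance with a few high-variance
# "signal" directions and many low-variance directions, so the number of
# directions unimportant for prediction far exceeds the sample size n.
n, k, d = 100, 5, 2000            # sample size, signal directions, total dimension
eps = 0.01                        # variance of the many unimportant directions (assumption)
eigvals = np.concatenate([np.ones(k), eps * np.ones(d - k)])

theta_star = np.zeros(d)
theta_star[:k] = 1.0              # true parameter lives in the high-variance directions

X = rng.normal(size=(n, d)) * np.sqrt(eigvals)   # rows ~ N(0, diag(eigvals))
y = X @ theta_star + rng.normal(size=n)          # noisy labels, noise variance 1

# Minimum-norm interpolating prediction rule: theta_hat = X^+ y (Moore-Penrose pseudoinverse).
theta_hat = np.linalg.pinv(X) @ y

X_test = rng.normal(size=(2000, d)) * np.sqrt(eigvals)
y_test = X_test @ theta_star + rng.normal(size=2000)

print("train MSE:", np.mean((X @ theta_hat - y) ** 2))            # ~ 0: perfect fit to noisy data
print("test  MSE:", np.mean((X_test @ theta_hat - y_test) ** 2))  # near the noise level (~1),
                                                                  # far below the ~6 risk of predicting 0

# Two effective ranks of the data covariance (one reading of the companion paper;
# treat the exact definitions as an assumption here):
#   r_k(S) = sum_{i>k} lambda_i / lambda_{k+1}
#   R_k(S) = (sum_{i>k} lambda_i)^2 / sum_{i>k} lambda_i^2
tail = eigvals[k:]
r_k = tail.sum() / tail[0]
R_k = tail.sum() ** 2 / (tail ** 2).sum()
print("r_k =", r_k, " R_k =", R_k, " (both >> n =", n, ")")
```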

Bio: 

Peter Bartlett is Professor of Computer Science and Statistics at the University of California at Berkeley, Associate Director of the Simons Institute for the Theory of Computing, Director of the Foundations of Data Science Institute, and Director of the Collaboration on the Theoretical Foundations of Deep Learning. His research interests include machine learning and statistical learning theory, and he is the co-author of the book Neural Network Learning: Theoretical Foundations. He has been an Institute of Mathematical Statistics Medallion Lecturer, a winner of the Malcolm McIntosh Prize for Physical Scientist of the Year, and an Australian Laureate Fellow, and he is a Fellow of the IMS, a Fellow of the ACM, and a Fellow of the Australian Academy of Science.

yins.yale.edu/event/yins-distinguished-lecturer-seminar-peter-bartlett-uc-berkeley