Host: Andre Wibisono
Title: What, When, and How can we Learn Adversarially Robustly?
Despite extraordinary progress, current machine learning systems have been shown to be brittle against adversarial examples: seemingly innocuous but carefully crafted perturbations of test examples that cause machine learning predictors to misclassify. Can we learn predictors that are robust to adversarial examples, and if so, how? There has been much empirical interest in this major challenge in machine learning, and in this talk we will present a theoretical perspective. We will illustrate the need to go beyond traditional approaches and principles, such as empirical (robust) risk minimization, and present new algorithmic ideas with stronger robust learning guarantees.
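For reference, the (adversarially) robust empirical risk minimization principle mentioned above can be sketched as follows; the notation (hypothesis class \(\mathcal{H}\), perturbation radius \(\epsilon\)) is assumed for illustration and does not appear in the announcement:

```latex
% Robust ERM: minimize the worst-case (adversarial) empirical 0-1 loss,
% where each test-time input x_i may be perturbed within an epsilon-ball.
\hat{h} \in \arg\min_{h \in \mathcal{H}}
  \frac{1}{n} \sum_{i=1}^{n}
  \max_{\|\delta_i\| \le \epsilon}
  \mathbf{1}\!\left[ h(x_i + \delta_i) \neq y_i \right]
```

Standard ERM is recovered when \(\epsilon = 0\); the talk concerns settings where this principle alone is insufficient for robust learning guarantees.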
Omar Montasser is a PhD candidate at TTI-Chicago, advised by Nathan Srebro. His research broadly explores the theory and foundations of machine learning. Recently, his research has focused on understanding and characterizing adversarially robust learning, and on designing learning algorithms with provable robustness guarantees under different settings. His work has been recognized with a Best Student Paper Award at COLT 2019.