Fri, January 3, 11:00 AM
45 MINUTES
Robustness to Adversarial Perturbations in Learning from Incomplete Data

What is the role of unlabeled data in an inference problem when the presumed underlying distribution is adversarially perturbed? In this talk, I explain how we answer this question by unifying two major learning frameworks: Semi-Supervised Learning (SSL) and Distributionally Robust Optimization (DRO). We develop a generalization theory for our framework based on a number of novel complexity measures, such as an adversarial extension of Rademacher complexity and its semi-supervised analogue. Moreover, our analysis quantifies the role of unlabeled data in the generalization process under a more general condition than existing works in SSL. Based on our framework, we also present a hybrid of DRO and EM algorithms with a guaranteed convergence rate. When implemented with deep neural networks, our method shows performance comparable to the state-of-the-art on a number of real-world benchmark datasets.
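For readers unfamiliar with the distributionally robust, semi-supervised setting the abstract describes, the sketch below gives a rough flavor of such a training step: an inner maximization over perturbations of the labeled data plus a consistency term on unlabeled data. It is only an illustrative assumption, not the speaker's algorithm; the model, the optimizer, the perturbation radius eps, and the weight lam are hypothetical placeholders.

    # Illustrative sketch only (assumed setup, not the speaker's method):
    # a generic adversarially robust training step with an unlabeled
    # consistency term, in the spirit of combining DRO and SSL.
    import torch
    import torch.nn.functional as F

    def robust_ssl_step(model, optimizer, x_lab, y_lab, x_unlab, eps=0.1, lam=1.0):
        model.train()

        # Inner maximization: one FGSM-style ascent step on the labeled inputs.
        x_adv = x_lab.clone().detach().requires_grad_(True)
        loss_adv = F.cross_entropy(model(x_adv), y_lab)
        grad, = torch.autograd.grad(loss_adv, x_adv)
        x_adv = (x_lab + eps * grad.sign()).detach()

        # Unlabeled consistency: predictions should be stable under perturbation.
        with torch.no_grad():
            p_clean = F.softmax(model(x_unlab), dim=1)
        noise = eps * torch.randn_like(x_unlab).sign()
        p_pert = F.log_softmax(model(x_unlab + noise), dim=1)
        consistency = F.kl_div(p_pert, p_clean, reduction="batchmean")

        # Outer minimization: worst-case labeled loss plus the unlabeled term.
        loss = F.cross_entropy(model(x_adv), y_lab) + lam * consistency
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()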

Amir Najafi

Assistant Professor, Computer Engineering Dept. at Sharif University of Technology

Amir received his B.Sc. and M.Sc. degrees in Electrical Engineering from Sharif University of Technology, Tehran, Iran, in 2012 and 2015, respectively. He is currently a Ph.D. student in the Computer Engineering Dept. of Sharif University of Technology. He was with the Broad Institute of MIT and Harvard, Boston, MA, in 2016 as a visiting research scholar, and interned at Preferred Networks in Tokyo, Japan, in 2018. His research interests include machine learning theory, information theory, and bioinformatics.