Calibrated Surrogate Losses for Adversarially Robust Classification
Han Bao, Clayton Scott, Masashi Sugiyama
Subject areas: Loss functions, Adversarial learning and robustness, Classification, Excess risk bounds and generalization error bounds, Supervised learning
Presented in: Session 1E, Session 2C
Abstract:
Adversarially robust classification seeks a classifier that is insensitive to adversarial perturbations of test patterns. This problem is often formulated via a minimax objective, where the target loss is the worst-case value of the 0-1 loss subject to a bound on the size of the perturbation. Recent work has proposed convex surrogates for the adversarial 0-1 loss, in an effort to make optimization more tractable. In this work, we consider the question of which surrogate losses are calibrated with respect to the adversarial 0-1 loss, meaning that minimization of the former implies minimization of the latter. We show that no convex surrogate loss is calibrated with respect to the adversarial 0-1 loss when restricted to the class of linear models. We further introduce a class of nonconvex losses and offer necessary and sufficient conditions for losses in this class to be calibrated.
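For intuition, a hedged sketch of the adversarial 0-1 loss described above, with notation assumed here rather than taken from the abstract (labels y in {-1, +1}, a linear score f(x) = <w, x>, a perturbation ball of radius epsilon, and the convention that ties count as errors):

\[
  \ell^{\mathrm{adv}}_{\varepsilon}(f; x, y)
  \;=\; \sup_{\|\delta\| \le \varepsilon} \mathbf{1}\{\, y\, f(x + \delta) \le 0 \,\}
  \;=\; \mathbf{1}\Big\{\, \inf_{\|\delta\| \le \varepsilon} y \,\langle w, x + \delta \rangle \le 0 \,\Big\}.
\]

Under this reading, a surrogate loss is calibrated when driving its (pointwise conditional) risk to the minimum forces the risk under \(\ell^{\mathrm{adv}}_{\varepsilon}\) to its minimum as well; the paper's results concern which surrogates satisfy this over linear models.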