Reasoning About Generalization via Conditional Mutual Information
Thomas Steinke, Lydia Zakynthinou
Subject areas: Information theory, Adaptive data analysis, Excess risk bounds and generalization error bounds
Presented in: Session 4A, Session 4C
Abstract:
We provide an information-theoretic framework for studying the generalization properties of machine learning algorithms. Our framework ties together existing approaches, including uniform convergence bounds and recent methods for adaptive data analysis.

Specifically, we use Conditional Mutual Information (CMI) to quantify how well the input (i.e., the training data) can be recognized given the output (i.e., the trained model) of the algorithm. We show that bounds on CMI can be obtained from VC dimension, compression schemes, differential privacy, and other methods. We then show that bounded CMI implies various forms of generalization.
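For readers skimming the abstract, here is a minimal LaTeX sketch of the paper's central quantity; the supersample notation ($\bar{Z}$, $S$, $\bar{Z}_S$) follows our reading of the paper and should be checked against the full text.

% The algorithm A draws its training set from a "supersample"
% \bar{Z} \in \mathcal{Z}^{n \times 2} of 2n i.i.d. samples from the
% distribution D, arranged as n pairs; the uniform selector
% S \in \{0,1\}^n picks one point from each pair to form \bar{Z}_S.
\[
  \mathrm{CMI}_{D}(A) \;=\; I\!\big( A(\bar{Z}_S) \,;\, S \;\big|\; \bar{Z} \big).
\]
% Since S consists of n uniform bits, \mathrm{CMI}_{D}(A) \le n always;
% per the abstract, bounds on this quantity follow from VC dimension,
% compression schemes, and differential privacy, and in turn imply
% various forms of generalization.

Intuitively, the better the trained model $A(\bar{Z}_S)$ lets an observer who sees both candidate points in each pair identify which one was used for training, the larger the CMI, and the weaker the resulting generalization guarantee.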