The Sample Complexity of Robust Covariance Testing
Ilias Diakonikolas, Daniel M. Kane
Session: Minimally Supervised Learning (B)
Session Chair: Aryeh Kontorovich
Poster: Poster Session 1
Abstract:
We study the problem of testing the covariance matrix of a high-dimensional Gaussian
in a robust setting, where the input distribution has been corrupted in Huber's contamination model.
Specifically, we are given i.i.d. samples from a distribution of the form $Z = (1-\epsilon) X + \epsilon B$,
where $X$ is a zero-mean Gaussian $\mathcal{N}(0, \Sigma)$ with unknown covariance $\Sigma$,
$B$ is a fixed but unknown noise distribution, and $\epsilon > 0$ is an arbitrarily small constant representing
the proportion of contamination.
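To make the contamination model concrete, here is a minimal sketch (in Python with NumPy; the function names and the particular choice of $B$ are illustrative assumptions, not taken from the paper) of drawing $n$ samples from $Z = (1-\epsilon) X + \epsilon B$:

    import numpy as np

    def sample_contaminated(n, d, eps, sigma, sample_noise, rng):
        # Decide independently for each sample whether it comes from
        # the noise distribution B (probability eps) or from N(0, Sigma).
        is_noise = rng.random(n) < eps
        samples = rng.multivariate_normal(np.zeros(d), sigma, size=n)
        k = int(is_noise.sum())
        if k > 0:
            samples[is_noise] = sample_noise(k, d, rng)
        return samples

    rng = np.random.default_rng(0)
    d = 10
    # Illustrative adversary: all noise points at one fixed location.
    noise = lambda k, d, rng: np.full((k, d), 3.0)
    Z = sample_contaminated(1000, d, 0.05, np.eye(d), noise, rng)

In Huber's model the adversary fixes $B$ in advance; the sketch simply instantiates one such choice.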
We want to distinguish between the case that $\Sigma$ is the identity matrix and the case that $\Sigma$ is
$\gamma$-far from the identity in Frobenius norm.
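As an illustration of the two hypotheses (a sketch only, not a testing algorithm), one can construct a covariance matrix at Frobenius distance exactly $\gamma$ from the identity and verify that distance numerically:

    import numpy as np

    d, gamma = 10, 0.5

    # Null hypothesis: Sigma is exactly the identity.
    sigma_null = np.eye(d)

    # Alternative: inflate each variance by gamma / sqrt(d), so that
    # ||Sigma - I||_F = sqrt(d) * (gamma / sqrt(d)) = gamma.
    sigma_far = (1.0 + gamma / np.sqrt(d)) * np.eye(d)

    print(np.linalg.norm(sigma_null - np.eye(d), "fro"))  # 0.0
    print(np.linalg.norm(sigma_far - np.eye(d), "fro"))   # 0.5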
In the absence of contamination, prior work gave a simple tester
for this hypothesis testing task that uses $O(d)$ samples. Moreover, this sample upper bound was shown
to be best possible, within constant factors. Our main result is that the sample complexity of covariance testing
dramatically increases in the contaminated setting. In particular, we prove a sample complexity lower bound
of $\Omega(d^2)$ for $\epsilon$ an arbitrarily small constant and $\gamma = 1/2$.
This lower bound is best possible, as $O(d^2)$ samples suffice to even robustly {\em learn}
the covariance. The conceptual implication of our result is that, for the natural setting we consider,
robust hypothesis testing is at least as hard as robust estimation.
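For intuition on the matching upper bound, the learn-then-test reduction is straightforward: robustly estimate the covariance to small Frobenius error, then threshold its distance from the identity. The sketch below assumes a hypothetical robust estimator robust_cov_estimate (any procedure achieving Frobenius error well below $\gamma$ from $O(d^2)$ corrupted samples would do; none is prescribed here):

    import numpy as np

    def robust_covariance_test(samples, gamma, robust_cov_estimate):
        # Learn-then-test: robustly estimate Sigma to Frobenius error
        # well below gamma, then threshold the distance from identity.
        sigma_hat = robust_cov_estimate(samples)
        d = sigma_hat.shape[0]
        dist = np.linalg.norm(sigma_hat - np.eye(d), "fro")
        return "identity" if dist < gamma / 2.0 else "gamma-far"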