Nearly Non-Expansive Bounds for Mahalanobis Hard Thresholding
Xiaotong Yuan, Ping Li
Subject areas: High-dimensional statistics, Learning with algebraic or combinatorial structure, Non-convex optimization
Presented in: Session 1E, Session 2C
Abstract:
Given a vector $w \in \mathbb{R}^p$ and a positive semi-definite matrix $A \in \mathbb{R}^{p\times p}$, we study the expansion-ratio bound for the Mahalanobis hard thresholding operator of $w$, defined as
\[
\mathcal{H}_{A,k}(w) := \argmin_{\|\theta\|_0 \le k} \frac{1}{2}\|\theta - w\|^2_A,
\]
where $k \le p$ is the desired sparsity level. The core contribution of this paper is to prove that for any $\bar k$-sparse vector $\bar w$ with $\bar k < k$, the estimation error $\|\mathcal{H}_{A,k}(w) - \bar w\|_A$ satisfies
\[
\|\mathcal{H}_{A,k}(w) - \bar w\|^2_A \le \left(1 + \mathcal{O}\left(\kappa(A,2k)\sqrt{\frac{\bar k}{k - \bar k}}\right)\right)\|w - \bar w\|^2_A,
\]
where $\kappa(A,2k)$ is the restricted strong condition number of $A$ over the $(2k)$-sparse subspaces. This estimation error bound is nearly non-expansive when $k$ is sufficiently larger than $\bar k$. In particular, when $A$ is the identity matrix, so that $\kappa(A,2k) \equiv 1$, our bound recovers the previously known nearly non-expansive bounds for the Euclidean hard thresholding operator. We further show that such a bound extends to an approximate version of $\mathcal{H}_{A,k}(w)$ estimated by the Hard Thresholding Pursuit (HTP) algorithm. We demonstrate the applicability of these bounds to the mean-squared-error analysis of HTP and of a novel extension based on preconditioning. Numerical evidence is provided to support our theory and to demonstrate the superiority of the proposed preconditioned HTP algorithm.
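To make the operator concrete, the following minimal NumPy sketch illustrates the Euclidean special case $A = I$, where $\mathcal{H}_{I,k}(w)$ reduces to keeping the $k$ largest-magnitude entries of $w$, and numerically spot-checks the stated bound with $\kappa(A,2k) \equiv 1$. The function name, the choice of constant $1$ for the unspecified $\mathcal{O}(\cdot)$ factor, and the random test instance are illustrative assumptions, not taken from the paper.

    import numpy as np

    def hard_threshold(w, k):
        """Euclidean hard thresholding H_{I,k}: zero out all but the k
        largest-magnitude entries of w (the exact argmin when A = I)."""
        theta = np.zeros_like(w)
        idx = np.argsort(np.abs(w))[-k:]  # indices of the k largest |w_i|
        theta[idx] = w[idx]
        return theta

    # Spot-check the bound on one random instance, taking the O(.) constant
    # to be 1 for illustration (the paper leaves this constant unspecified):
    # ||H_k(w) - w_bar||^2 <= (1 + sqrt(k_bar / (k - k_bar))) * ||w - w_bar||^2
    rng = np.random.default_rng(0)
    p, k_bar, k = 100, 5, 50
    w_bar = np.zeros(p)
    w_bar[rng.choice(p, k_bar, replace=False)] = rng.standard_normal(k_bar)
    w = w_bar + 0.1 * rng.standard_normal(p)  # noisy observation of w_bar
    lhs = np.linalg.norm(hard_threshold(w, k) - w_bar) ** 2
    rhs = (1 + np.sqrt(k_bar / (k - k_bar))) * np.linalg.norm(w - w_bar) ** 2
    print(lhs <= rhs)  # True on this instance; the ratio shrinks as k grows

For general positive semi-definite $A$, the argmin over $k$-sparse vectors is no longer a simple top-$k$ selection, which is why the paper analyzes an approximate version computed by HTP.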