A Greedy Anytime Algorithm for Sparse PCA
Dan Vilenchik, Adam Soffer, Guy Holtzman
Subject areas: Non-convex optimization, Combinatorial optimization, Computational complexity, High-dimensional statistics, Unsupervised and semi-supervised learning
Presented in: Session 1A, Session 1E
Abstract:
The heavy computational effort involved in solving some high-dimensional statistical problems, in particular problems involving non-convex optimization, has popularized the development and analysis of algorithms that run efficiently (in polynomial time) but with no general guarantee of statistical consistency. In light of ever-increasing compute power and decreasing costs, a more useful way to characterize algorithms is by their ability to calibrate the invested computational effort with characteristics of the input at hand and with the available computational resources. We propose a new greedy algorithm for the $\ell_0$-sparse PCA problem that supports this calibration principle. We provide a rigorous analysis of our algorithm in the spiked covariance model, as well as simulation results and a comparison with other existing methods. Our findings show that our algorithm recovers the spike in SNR regimes where all polynomial-time algorithms fail, while running in reasonable parallel time on a cluster.
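To make the problem setting concrete, the sketch below is an illustrative generic greedy forward-selection heuristic for $\ell_0$-sparse PCA, evaluated on data drawn from a spiked covariance model ($\Sigma = I + \beta vv^\top$ with a $k$-sparse unit vector $v$). It is not the algorithm analyzed in the paper; the function names, parameter choices, and stopping rule are assumptions made purely for illustration. The greedy, support-growing structure does hint at how an anytime behavior can arise: the procedure can be stopped after any number of selected coordinates and still return a valid sparse direction.

```python
import numpy as np

def greedy_sparse_pca(A, k):
    """Illustrative greedy forward selection for l0-sparse PCA (not the paper's algorithm):
    grow a support set one coordinate at a time, each time adding the coordinate that
    maximizes the top eigenvalue of the covariance submatrix restricted to the support."""
    p = A.shape[0]
    support = []
    for _ in range(k):
        best_idx, best_val = None, -np.inf
        for j in range(p):
            if j in support:
                continue
            idx = support + [j]
            sub = A[np.ix_(idx, idx)]
            val = np.linalg.eigvalsh(sub)[-1]  # largest eigenvalue of the submatrix
            if val > best_val:
                best_idx, best_val = j, val
        support.append(best_idx)
    # principal eigenvector of the selected submatrix, padded with zeros elsewhere
    sub = A[np.ix_(support, support)]
    _, V = np.linalg.eigh(sub)
    x = np.zeros(p)
    x[support] = V[:, -1]
    return x

if __name__ == "__main__":
    # Spiked covariance model: Sigma = I + beta * v v^T with a k-sparse unit vector v
    rng = np.random.default_rng(0)
    p, n, k, beta = 200, 150, 10, 3.0   # hypothetical dimensions and SNR
    v = np.zeros(p)
    true_support = rng.choice(p, size=k, replace=False)
    v[true_support] = 1.0 / np.sqrt(k)
    Sigma = np.eye(p) + beta * np.outer(v, v)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    A = X.T @ X / n                      # empirical covariance matrix
    x_hat = greedy_sparse_pca(A, k)
    overlap = len(set(np.flatnonzero(x_hat)) & set(true_support))
    print(f"recovered {overlap} of {k} spike coordinates")
```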