Learning Polynomials in Few Relevant Dimensions
Sitan Chen, Raghu Meka
Subject areas: Regression, Convex optimization, High-dimensional statistics, Non-convex optimization
Presented in: Session 3A, Session 3C
Abstract:
Polynomial regression is a basic primitive in learning and statistics. In its most basic form, the goal is to fit a degree-$d$ polynomial to a response variable $y$ in terms of an $n$-dimensional input vector $x$. This problem is extremely well studied, with many applications, and has sample and runtime complexity $\Theta(n^d)$. Can one achieve better runtime if the intrinsic dimension of the data is much smaller than the ambient dimension $n$?

Concretely, we are given samples $(x, y)$ where $y$ is a polynomial of degree at most $d$ in an unknown $r$-dimensional projection (the relevant dimensions) of $x$. This can be seen both as a generalization of phase retrieval and as a special case of learning multi-index models where the link function is an unknown low-degree polynomial. Note that without distributional assumptions, this is at least as hard as junta learning.

In this work we consider the important case where the covariates are Gaussian. We give an algorithm that learns the polynomial to accuracy $\epsilon$ with sample complexity roughly $N = O_{r,d}(n \log^2(1/\epsilon) (\log n)^d)$ and runtime $O_{r,d}(N n^2)$. Prior to our work, no such results were known even for the case $r = 1$.

We introduce a new \emph{filtered PCA} approach to obtain a warm start for the true subspace, and use \emph{geodesic SGD} to boost to arbitrary accuracy; our techniques may be of independent interest, especially for problems involving subspace recovery or the analysis of SGD on manifolds.
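To make the two components concrete, here is a minimal NumPy sketch, not the paper's implementation: a filtered-PCA warm start that keeps samples with large $|y|$ and diagonalizes their empirical second moment, and a single Riemannian SGD step that uses a QR retraction as a stand-in for an exact geodesic. The threshold `tau`, step size `eta`, and the toy link $y = \langle w, x\rangle^2$ (i.e., $r = 1$, $d = 2$) are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def filtered_pca(X, y, r, tau):
    """Warm start via filtered PCA (sketch): restrict to samples with
    |y| > tau and take the top-r eigenvectors of their empirical second
    moment. Conditioning on a large response tilts the Gaussian
    covariates toward the hidden r-dimensional subspace."""
    Xf = X[np.abs(y) > tau]
    M = Xf.T @ Xf / max(len(Xf), 1)
    _, V = np.linalg.eigh(M)           # eigenvalues in ascending order
    return V[:, -r:]                   # n x r orthonormal basis estimate

def geodesic_step(U, euclid_grad, eta):
    """One Riemannian SGD step over orthonormal n x r frames: project the
    Euclidean gradient onto the tangent space at U, move, and retract via
    QR (a standard surrogate for following the exact geodesic)."""
    G = euclid_grad - U @ (U.T @ euclid_grad)  # tangent-space projection
    Q, _ = np.linalg.qr(U - eta * G)           # retract to orthonormal frames
    return Q

# Toy demo: y is the square of a one-dimensional projection (r = 1, d = 2).
n, r, N = 50, 1, 20000
W = np.linalg.qr(rng.standard_normal((n, r)))[0]   # hidden subspace
X = rng.standard_normal((N, n))
y = (X @ W).ravel() ** 2
U0 = filtered_pca(X, y, r, tau=2.0)
print("warm-start alignment:", abs((U0.T @ W).item()))
# A full algorithm would then refine U0 with geodesic SGD steps on a
# regression loss; geodesic_step shows the shape of one such update.
```

The thresholding step is what distinguishes filtered PCA from vanilla PCA here: for Gaussian covariates, the unconditioned second moment of $x$ is the identity and carries no signal about the hidden subspace, whereas conditioning on large $|y|$ does.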