Logsmooth Gradient Concentration and Tighter Runtimes for Metropolized Hamiltonian Monte Carlo
Yin Tat Lee, Ruoqi Shen, Kevin Tian
Subject areas: Sampling algorithms, Bayesian methods
Presented in: Session 3B, Session 3D
Abstract:
We show that the gradient norm $\|\nabla f(x)\|$ for $x \sim \exp(-f(x))$, where $f$ is strongly convex and smooth, concentrates tightly around its mean. This removes a barrier in the prior state-of-the-art analysis for the well-studied Metropolized Hamiltonian Monte Carlo (HMC) algorithm for sampling from a strongly logconcave distribution \cite{DwivediCWY18}. We correspondingly demonstrate that Metropolized HMC mixes in $\tilde{O}(\kappa d)$ iterations\footnote{We use $\tilde{O}$ to hide logarithmic factors in problem parameters.}, improving upon the $\tilde{O}(\kappa^{1.5}\sqrt{d} + \kappa d)$ runtime of \cite{DwivediCWY18, ChenDWY19} by a factor $(\kappa/d)^{1/2}$ when the condition number $\kappa$ is large. Our mixing time analysis introduces several techniques which to our knowledge have not appeared in the literature and may be of independent interest, including restrictions to a nonconvex set with good conductance behavior, and a new reduction technique for boosting a constant-accuracy total variation guarantee under weak warmness assumptions. This is the first mixing time result for logconcave distributions using only first-order function information which achieves linear dependence on $\kappa$; we also give evidence that this dependence is likely to be necessary for standard Metropolized first-order methods.
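For readers unfamiliar with the algorithm under analysis, the following is a minimal sketch of Metropolized HMC: a leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject filter, targeting $\exp(-f)$. The function names, step size, and the standard Gaussian test target ($f(x) = \|x\|^2/2$, so $\kappa = 1$) are illustrative choices, not taken from the paper.

```python
import numpy as np

def metropolized_hmc(f, grad_f, x0, step_size, n_leapfrog, n_iters, rng):
    """Run one chain of Metropolized HMC targeting the density exp(-f)."""
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        # Resample momentum from a standard Gaussian.
        v = rng.standard_normal(x.shape)
        x_prop, v_prop = x.copy(), v.copy()
        # Leapfrog integration of the Hamiltonian dynamics.
        v_prop -= 0.5 * step_size * grad_f(x_prop)
        for _ in range(n_leapfrog - 1):
            x_prop += step_size * v_prop
            v_prop -= step_size * grad_f(x_prop)
        x_prop += step_size * v_prop
        v_prop -= 0.5 * step_size * grad_f(x_prop)
        # Metropolis filter: accept with probability exp(H_old - H_new),
        # where H(x, v) = f(x) + ||v||^2 / 2 is the Hamiltonian.
        h_old = f(x) + 0.5 * v @ v
        h_new = f(x_prop) + 0.5 * v_prop @ v_prop
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x = x_prop
        samples.append(x.copy())
    return np.array(samples)

# Illustrative target: standard Gaussian in 5 dimensions.
rng = np.random.default_rng(0)
f = lambda x: 0.5 * (x @ x)
grad_f = lambda x: x
samples = metropolized_hmc(f, grad_f, np.zeros(5), 0.3, 5, 2000, rng)
```

The Metropolis filter is what makes this a "Metropolized" first-order method: it corrects the discretization error of the leapfrog steps exactly, at the cost of occasional rejections, which is the regime the abstract's $\tilde{O}(\kappa d)$ mixing bound addresses.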