Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems
Noah Golowich, Sarath Pattathil, Constantinos Daskalakis, Asuman Ozdaglar
Subject areas: Convex optimization; Economics, game theory, and incentives; Non-convex optimization
Presented in: Session 3B, Session 3D
Abstract:
In this paper, we study the smooth convex-concave saddle point problem. Specifically, we analyze the last-iterate convergence properties of the Extragradient (EG) algorithm. It is well known that the ergodic (averaged) iterates of EG converge at a rate of O(1/T) (Nemirovski, 2004). We show that the last iterate of EG converges at a rate of O(1/√T). To the best of our knowledge, this is the first paper to provide a convergence rate guarantee for the last iterate of EG on the smooth convex-concave saddle point problem. Moreover, we show that this rate is tight by proving a lower bound of Ω(1/√T) for the last iterate. This lower bound therefore establishes a quadratic separation between the convergence rates of the ergodic and last iterates in smooth convex-concave saddle point problems.
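For concreteness, the following is a minimal Python sketch of the EG update discussed above, applied to an unconstrained saddle point problem min_x max_y f(x, y); the step size eta, the iteration count, the bilinear test objective, and all function names are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def extragradient(F, z0, eta=0.1, T=2000):
    """Extragradient (EG) sketch for a monotone operator F.

    For min_x max_y f(x, y), take F(z) = (grad_x f, -grad_y f) with
    z = (x, y). Returns the last iterate z_T and the ergodic average
    (1/T) * sum_t z_t, the two quantities compared in the paper.
    """
    z = np.asarray(z0, dtype=float)
    avg = np.zeros_like(z)
    for _ in range(T):
        z_mid = z - eta * F(z)     # extrapolation ("look-ahead") step
        z = z - eta * F(z_mid)     # update using the operator at the midpoint
        avg += z
    return z, avg / T

# Illustrative example: the bilinear objective f(x, y) = x * y,
# whose operator is F(x, y) = (y, -x); the saddle point is (0, 0).
F = lambda z: np.array([z[1], -z[0]])
last, ergodic = extragradient(F, z0=[1.0, 1.0])
print(last, ergodic)  # both should approach (0, 0)
```

Tracking both outputs mirrors the comparison in the abstract: the averaged iterate carries the classical O(1/T) guarantee, while the last iterate is the object of the O(1/√T) upper bound and matching lower bound shown in the paper.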