Min-Max Optimization Made Simple: Approximating the Proximal Point Method via Contraction Maps
SOSA, 2023
In this paper we present a first-order method that admits near-optimal convergence rates for convex-concave min-max problems while requiring a significantly simpler analysis than standard methods such as Extra Gradient (EG) and Optimistic Gradient Descent Ascent (OGDA). Similar to the seminal work of Nemirovski and the recent approach of Piliouras et al. in normal-form games, our work builds on the fact that the update rule of the Proximal Point method (PP) can be approximated up to accuracy $\epsilon$ with only $O(\log 1/\epsilon)$ additional gradient calls through the iterations of a contraction map. By combining the analysis of the (PP) method with an error-propagation analysis, we establish that the resulting first-order method, called Clairvoyant Extra Gradient, admits near-optimal time-average convergence for general domains and last-iterate convergence in the unconstrained case.
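To make the main idea concrete, the following is a minimal sketch (not the authors' code) for an unconstrained bilinear instance $\min_x \max_y x^\top A y$: the implicit PP update $z_{t+1} = z_t - \eta F(z_{t+1})$ is approximated by iterating the map $w \mapsto z_t - \eta F(w)$, which is a contraction when $\eta < 1/L$, so a logarithmic number of inner iterations suffices. The function names, step size, and iteration counts below are illustrative assumptions, not values from the paper.

```python
# Sketch: Clairvoyant Extra Gradient via a contraction-map approximation of PP.
# Assumes a bilinear saddle problem min_x max_y x^T A y with operator
# F(x, y) = (A y, -A^T x); eta is chosen below 1/L so the inner map contracts.

import numpy as np

def F(z, A):
    """Monotone operator of the bilinear problem: F(x, y) = (A y, -A^T x)."""
    n = A.shape[0]
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

def clairvoyant_eg(z0, A, eta, inner_iters=20, outer_iters=1000):
    """Each outer step approximates the implicit PP iterate by a short
    fixed-point loop, then takes the gradient step at that approximation."""
    z = z0.copy()
    avg = np.zeros_like(z)
    for _ in range(outer_iters):
        # Inner loop: w <- z - eta * F(w) contracts when eta * L < 1,
        # so O(log 1/eps) iterations reach eps-accuracy.
        w = z.copy()
        for _ in range(inner_iters):
            w = z - eta * F(w, A)
        # Outer update evaluated at the approximate future iterate.
        z = z - eta * F(w, A)
        avg += z
    return z, avg / outer_iters  # last iterate and time-average

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    eta = 0.5 / np.linalg.norm(A, 2)  # ensures the inner map is a contraction
    z0 = rng.standard_normal(10)
    z_last, z_avg = clairvoyant_eg(z0, A, eta)
    print("||F(z_avg)|| =", np.linalg.norm(F(z_avg, A)))
```

In this toy example the unique equilibrium is the origin, so the printed operator norm at the time-average iterate shrinking toward zero illustrates the time-average guarantee; the last iterate also converges here since the problem is unconstrained.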