Two steps at a time -- taking GAN training in stride with Tseng's method

Author(s)
Axel Böhm, Michael Sedlmayer, Ernő Robert Csetnek, Radu Ioan Bot
Abstract

Motivated by the training of Generative Adversarial Networks (GANs), we study methods for solving minimax problems with additional nonsmooth regularizers. We do so by employing \emph{monotone operator} theory, in particular the \emph{Forward-Backward-Forward (FBF)} method, which avoids the known issue of limit cycling by correcting each update by a second gradient evaluation. Furthermore, we propose a seemingly new scheme which recycles old gradients to mitigate the additional computational cost. In doing so we rediscover a known method, related to \emph{Optimistic Gradient Descent Ascent (OGDA)}. For both schemes we prove novel convergence rates for convex-concave minimax problems via a unifying approach. The derived error bounds are in terms of the gap function for the ergodic iterates. For the deterministic and the stochastic problem we show a convergence rate of $\mathcal{O}(1/k)$ and $\mathcal{O}(1/\sqrt{k})$, respectively. We complement our theoretical results with empirical improvements in the training of Wasserstein GANs on the CIFAR10 dataset.
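
The abstract refers to two update schemes: the FBF correction step, which uses a second gradient evaluation per iteration, and a variant that recycles the previous gradient. As a rough illustration only, the sketch below implements the textbook versions of both updates (Tseng's FBF and the standard OGDA extrapolation, to which the recycled-gradient scheme is said to be related) on a toy bilinear saddle problem with no regularizer, so the proximal (backward) step reduces to the identity. The problem, step size, and variable names are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

def F(z):
    """Monotone operator of min_x max_y x^T A y; z = (x, y) stacked."""
    x, y = np.split(z, 2)
    return np.concatenate([A @ y, -A.T @ x])

L = np.linalg.norm(A, 2)   # Lipschitz constant of F (spectral norm of A)
lam = 0.4 / L              # step size below 1/L (and below 1/(2L) for OGDA)

# Tseng's FBF: forward step, (here trivial) backward step, then a correction
# using a second operator evaluation at the intermediate point.
z = rng.standard_normal(10)
for _ in range(2000):
    Fz = F(z)
    z_half = z - lam * Fz                  # forward step (a prox would act here)
    z = z_half - lam * (F(z_half) - Fz)    # correction with a second evaluation

# Optimistic (OGDA-style) update: recycle the stored operator value,
# so only one new evaluation is needed per iteration.
w = rng.standard_normal(10)
F_prev = F(w)
for _ in range(2000):
    Fw = F(w)
    w = w - lam * (2.0 * Fw - F_prev)
    F_prev = Fw

print("FBF  distance to the saddle point:", np.linalg.norm(z))
print("OGDA distance to the saddle point:", np.linalg.norm(w))
```

With a step size below $1/L$ (here $L = \|A\|_2$), both sequences approach the unique saddle point at the origin; FBF pays two operator evaluations per iteration, while the optimistic update needs only one new evaluation plus the stored one, which is the computational saving the abstract alludes to.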

Organisation(s)
Department of Mathematics, Research Network Data Science
Journal
SIAM Journal on Mathematics of Data Science
Volume
4
Pages
750-771
Publication date
2022
Peer reviewed
Yes
Austrian Fields of Science 2012
101016 Optimisation, 102019 Machine learning
Portal url
https://ucris.univie.ac.at/portal/en/publications/two-steps-at-a-time--taking-gan-training-in-stride-with-tsengs-method(b90e3cf6-3c28-4ac5-b744-a724e5c69868).html