Abstract
Despite remarkable empirical success, the training dynamics of generative adversarial networks (GANs), which involve solving a minimax game with stochastic gradients, are still poorly understood. In this work, we analyze last-iterate convergence of simultaneous gradient descent (simGD) and its variants under the assumption of convex-concavity, guided by a continuous-time analysis with differential equations. First, we show that simGD, as is, converges with stochastic subgradients under strict convexity in the primal variable. Second, we generalize optimistic simGD to accommodate an optimism rate separate from the learning rate and show its convergence with full gradients. Finally, we present anchored simGD, a new method, and show its convergence with stochastic subgradients.
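The contrast between plain simGD and its optimistic variant can be seen on the standard bilinear toy game f(x, y) = xy, whose saddle point is (0, 0). The sketch below is illustrative only: it uses the classical optimistic update in which the optimism rate equals the learning rate, not the paper's generalized scheme with a separate optimism rate, and the step size 0.1 is an arbitrary choice.

```python
import numpy as np

def G(z):
    """Simultaneous-gradient field of f(x, y) = x * y: (df/dx, -df/dy)."""
    x, y = z
    return np.array([y, -x])

def simgd(z0, gamma=0.1, iters=2000):
    """Plain simGD: z_{k+1} = z_k - gamma * G(z_k).
    On this bilinear game the iterate norm grows by sqrt(1 + gamma^2)
    every step, so the last iterate spirals outward and diverges."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z = z - gamma * G(z)
    return z

def optimistic_simgd(z0, gamma=0.1, iters=2000):
    """Optimistic simGD (optimism rate tied to the learning rate):
    z_{k+1} = z_k - gamma * (2 G(z_k) - G(z_{k-1})).
    The extrapolated gradient damps the rotation, so the last
    iterate converges toward the saddle point."""
    z_prev = np.asarray(z0, dtype=float)
    z = z_prev.copy()
    for _ in range(iters):
        g, g_prev = G(z), G(z_prev)
        z_prev, z = z, z - gamma * (2 * g - g_prev)
    return z

z0 = np.array([1.0, 1.0])
print(np.linalg.norm(simgd(z0)))             # large: simGD diverges here
print(np.linalg.norm(optimistic_simgd(z0)))  # small: last iterate near (0, 0)
```

This is the deterministic full-gradient setting; the paper's stochastic-subgradient results (and the anchored variant) require the schedules analyzed there.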
URL
https://arxiv.org/abs/1905.10899