Abstract
Recent work has shown that line search methods greatly increase the performance of traditional stochastic gradient descent methods on a variety of datasets and architectures [1], [2]. In this work we extend line search methods to the novel and highly popular Transformer architecture and to dataset domains in natural language processing. More specifically, we combine the Armijo line search with the Adam optimizer and extend it by subdividing the network's architecture into sensible units and performing the line search separately on each of these local units. Our optimization method outperforms the traditional Adam optimizer and achieves significant performance improvements for small datasets or small training budgets, while performing equal to or better than it in the other tested cases. Our work is publicly available as a Python package, which provides a hyperparameter-free PyTorch optimizer that is compatible with arbitrary network architectures.
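To illustrate the core mechanism the abstract describes, a backtracking Armijo search applied along an update direction for one local unit (e.g., a single layer's parameter group), here is a minimal, hypothetical PyTorch sketch. It is not the authors' released package; the function name, defaults, and the closure interface are assumptions for illustration only:

```python
import torch

def armijo_step(params, closure, direction,
                eta=1.0, c=0.1, beta=0.5, max_backtracks=10):
    """Backtracking Armijo line search along a given descent direction.

    params:    tensors of one "local unit" (e.g., a layer), grads populated
    closure:   callable that re-evaluates and returns the loss (no backward)
    direction: per-parameter update direction (e.g., the negated Adam step)
    """
    with torch.no_grad():
        loss0 = closure()
        # Directional derivative g^T d for the Armijo sufficient-decrease test.
        gTd = sum((p.grad * d).sum() for p, d in zip(params, direction))
        originals = [p.detach().clone() for p in params]
        for _ in range(max_backtracks):
            # Trial step: w <- w0 + eta * d.
            for p, p0, d in zip(params, originals, direction):
                p.copy_(p0 + eta * d)
            # Accept eta once f(w0 + eta*d) <= f(w0) + c * eta * g^T d.
            if closure() <= loss0 + c * eta * gTd:
                return eta
            eta *= beta  # shrink the step size and retry
        return eta  # fall back to the smallest tried step
```

Performing this search separately per unit, rather than once globally, is what the abstract refers to as subdividing the architecture: each unit can then accept its own step size.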
URL
https://arxiv.org/abs/2403.18506