Abstract
We study gradient flow on the exponential loss for a classification problem with a one-layer softmax attention model, where the key and query weight matrices are trained separately. Under a separability assumption on the data, we show that when gradient flow achieves the minimal loss value, it further implicitly minimizes the nuclear norm of the product of the key and query weight matrices. Such implicit regularization can be described by a Support Vector Machine (SVM) problem with respect to the attention weights. This finding contrasts with prior results showing that gradient descent induces an implicit regularization on the Frobenius norm of the product weight matrix when the key and query matrices are combined into a single weight matrix for training. For diagonal key and query matrices, our analysis builds upon the reparameterization technique and exploits approximate KKT conditions of the SVM associated with the classification data. Moreover, the results are extended to general weight configurations, given proper alignment of the weight matrices' singular spaces with the data features at initialization.
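To make the setting concrete, the following is a minimal sketch of the kind of objective and implicit-bias problem the abstract refers to; the specific token structure, the fixed linear head $\boldsymbol{v}$, the query token $\boldsymbol{z}$, and the form of the margin constraints are illustrative assumptions, not the paper's exact formulation.

A one-layer softmax attention predictor with separately trained key and query matrices $W_K, W_Q$ can be written as
$$
f_{W_K, W_Q}(X) \;=\; \boldsymbol{v}^\top X^\top \operatorname{softmax}\!\bigl( X W_K W_Q^\top \boldsymbol{z} \bigr),
$$
where the rows of $X$ are the input tokens. Gradient flow is run on the exponential loss over the training set $\{(X_i, y_i)\}_{i=1}^{n}$,
$$
\mathcal{L}(W_K, W_Q) \;=\; \sum_{i=1}^{n} \exp\!\bigl(-y_i\, f_{W_K, W_Q}(X_i)\bigr).
$$
The implicit bias described in the abstract is then toward an attention SVM with a nuclear-norm objective on the product $W = W_K W_Q^\top$,
$$
\min_{W} \;\|W\|_* \quad \text{subject to margin constraints on the attention scores of each training example},
$$
whereas the prior results for training the combined matrix $W$ directly yield the analogous problem with the Frobenius norm $\|W\|_F$ in place of the nuclear norm $\|W\|_*$.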
Abstract (translated)
We study gradient flow on the exponential loss for a classification problem with a one-layer softmax attention model, where the key and query weight matrices are trained separately. Under a separability assumption on the data, we show that when gradient flow reaches the minimal loss value, it further implicitly minimizes the nuclear norm of the product of the key and query weight matrices. This implicit regularization can be described as a Support Vector Machine (SVM) problem with respect to the attention weights. This finding contrasts with prior results showing that gradient descent induces an implicit regularization on the Frobenius norm of the product weight matrix when the key and query matrices are combined into a single weight matrix for training. For diagonal key and query matrices, our analysis builds on a reparameterization technique and the approximate KKT conditions of the SVM associated with the classification data. Moreover, the results extend to general weight configurations, given proper alignment of the weight matrices' singular spaces with the data features at initialization.
URL
https://arxiv.org/abs/2403.08699