Abstract
Advancing towards generalist agents requires handling multiple tasks concurrently with a unified model, underscoring the growing importance of training a single model on multiple downstream tasks simultaneously. A common issue in multi-task learning is gradient conflict, which creates competition among tasks during joint training: improvements in one task often come at the expense of deterioration in another. Although several optimization methods address this issue by manipulating task gradients for better task balancing, they do not reduce the incidence of gradient conflict itself. In this paper, we systematically investigate the occurrence of gradient conflict across different methods and propose a strategy to reduce such conflicts through sparse training (ST), wherein only a portion of the model's parameters are updated during training while the rest remain unchanged. Our extensive experiments demonstrate that ST effectively mitigates conflicting gradients and leads to superior performance. Furthermore, ST can be easily integrated with gradient manipulation techniques, enhancing their effectiveness.
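To make the two notions concrete, below is a minimal sketch (not the paper's implementation): it measures gradient conflict as the cosine similarity between per-task gradients and applies a fixed binary mask so that only a subset of parameters is updated, in the spirit of sparse training. The toy model, surrogate task losses, and ~30% sparsity ratio are illustrative assumptions.

# Minimal sketch, assuming a toy PyTorch model and a fixed random mask.
# Illustrates (1) gradient conflict as negative cosine similarity between
# per-task gradients and (2) a sparse update that touches only masked parameters.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(16, 4)                       # shared backbone (toy example)
params = [p for p in model.parameters() if p.requires_grad]

# Fixed random mask: only ~30% of parameters receive updates (sparse training).
masks = [(torch.rand_like(p) < 0.3).float() for p in params]

def task_grads(loss):
    # Per-task gradients w.r.t. the shared parameters, plus a flattened copy.
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return grads, torch.cat([g.reshape(-1) for g in grads])

x = torch.randn(8, 16)
loss_a = model(x)[:, :2].pow(2).mean()         # surrogate loss for task A
loss_b = (model(x)[:, 2:] - 1).pow(2).mean()   # surrogate loss for task B

ga, ga_flat = task_grads(loss_a)
gb, gb_flat = task_grads(loss_b)

# Gradient conflict: the task gradients point in opposing directions when
# their cosine similarity is negative.
cos = torch.nn.functional.cosine_similarity(ga_flat, gb_flat, dim=0)
print(f"cosine(g_A, g_B) = {cos.item():.3f}  (conflict if < 0)")

# Sparse update: mask the summed task gradients before the SGD step so that
# only the selected subset of parameters changes.
lr = 1e-2
with torch.no_grad():
    for p, g_a, g_b, m in zip(params, ga, gb, masks):
        p -= lr * m * (g_a + g_b)

In the same spirit, the mask could be applied after a gradient-manipulation step (e.g., one that projects away conflicting gradient components), which is how the abstract suggests ST can be combined with such techniques; the specific combination used in the paper is not detailed here.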
URL
https://arxiv.org/abs/2411.18615