Abstract
Recently, several approaches have successfully demonstrated that weight-sharing Neural Architecture Search (NAS) can effectively explore a search space of elastic low-rank adapters (LoRA), enabling parameter-efficient fine-tuning (PEFT) and compression of large language models. In this paper, we introduce Shears, a novel approach demonstrating how integrating cost-effective sparsity with a proposed Neural Low-rank adapter Search (NLS) algorithm can further improve the efficiency of PEFT. Results demonstrate the benefits of Shears over other methods: it reaches high sparsity levels while improving accuracy, or incurring only a small drop, using a single GPU for a couple of hours.
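To make the idea concrete, the following is a minimal PyTorch sketch of the mechanism the abstract describes: a frozen (potentially sparsified) base weight combined with an elastic LoRA adapter whose active rank can be switched among candidates, as in a weight-sharing search over adapter configurations. This is an illustration under assumptions, not the authors' implementation; the class name ElasticLoRALinear, the rank-selection interface, and the hyperparameters are hypothetical.

import torch
import torch.nn as nn

class ElasticLoRALinear(nn.Module):
    """Frozen base linear layer plus a LoRA adapter with a switchable rank.

    The adapter is parameterized at a maximum rank; smaller sub-adapters are
    obtained by slicing, so all rank choices share weights (the weight-sharing
    principle that NAS-style approaches such as NLS exploit).
    """

    def __init__(self, in_features, out_features, max_rank=32):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False  # base model stays frozen (and may be sparsified)
        # Standard LoRA initialization: A small random, B zero, so the adapter
        # starts as an identity perturbation.
        self.lora_A = nn.Parameter(torch.randn(max_rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, max_rank))
        self.active_rank = max_rank

    def set_active_rank(self, r):
        # Chosen per layer by the search (or sampled during super-adapter training).
        self.active_rank = r

    def forward(self, x):
        r = self.active_rank
        # Only the first r rows/columns of the shared adapter are active.
        delta = (x @ self.lora_A[:r].T) @ self.lora_B[:, :r].T
        return self.base(x) + delta

In this sketch, training would sample different active ranks per step so that every sub-adapter is trained, after which a search selects per-layer ranks subject to a parameter or accuracy objective.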
URL
https://arxiv.org/abs/2404.10934