Abstract
The rise of deep learning has driven significant progress in fields such as computer vision, natural language processing, and medical imaging, primarily through the adaptation of pre-trained models to specific tasks. Traditional fine-tuning methods, which adjust all model parameters, face challenges due to high computational and memory demands. This has led to the development of Parameter-Efficient Fine-Tuning (PEFT) techniques, which selectively update a small subset of parameters to balance computational efficiency with performance. This review examines PEFT approaches, offering a detailed comparison of the main strategies and highlighting applications across different domains, including text generation, medical imaging, protein modeling, and speech synthesis. By assessing the effectiveness of PEFT methods in reducing computational load, speeding up training, and lowering memory usage, this paper contributes to making deep learning more accessible and adaptable, facilitating its wider application and encouraging innovation in model optimization. Ultimately, the paper aims to offer insights into PEFT's evolving landscape, guiding researchers and practitioners in overcoming the limitations of conventional fine-tuning approaches.
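To make the mechanism concrete, below is a minimal sketch of one common PEFT strategy, a LoRA-style low-rank adapter in PyTorch, in which the pre-trained weights are frozen and only a small number of newly added parameters are trained. The class name LoRALinear and the rank/alpha defaults are illustrative assumptions for this sketch, not an implementation taken from the paper.

# Minimal PEFT sketch (LoRA-style adapter); names and defaults are illustrative assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a small trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights: only the adapter parameters below are updated.
        for p in self.base.parameters():
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction: W x + s * B A x
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

# Usage: wrap a layer of a pre-trained model, then train only the adapter parameters.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")

Running the snippet shows that only about 2% of the layer's parameters are trainable, which illustrates the source of the reduced memory use and faster training that the review surveys.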
URL
https://arxiv.org/abs/2404.13506