Abstract
Recent deep learning models such as ChatGPT, trained with the back-propagation algorithm, have exhibited remarkable performance. However, the disparity between back-propagation and the learning processes of the biological brain has been noted. The Forward-Forward algorithm, which trains deep learning models using only forward passes, has emerged to address this gap. Although the Forward-Forward algorithm cannot replace back-propagation, owing to limitations such as its reliance on special inputs and loss functions, it has the potential to be useful in situations where back-propagation is difficult to apply. To work around this limitation and verify its usability, we propose an Unsupervised Forward-Forward algorithm. Using an unsupervised learning model enables training with standard loss functions and unrestricted inputs. This approach leads to stable learning and enables versatile use across various datasets and tasks. From a usability perspective, given the characteristics of the Forward-Forward algorithm and the advantages of the proposed method, we anticipate practical applications even in scenarios such as federated learning, where deep learning layers must be trained separately in physically distributed environments.
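To make the Forward-Forward idea the abstract builds on concrete, here is a minimal, illustrative sketch of a single layer trained purely from its own forward pass using a layer-local "goodness" objective (after Hinton's original Forward-Forward proposal). The class name, inputs, threshold, and learning rate are assumptions for illustration only, not the paper's actual code or its unsupervised variant.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FFLayer:
    """One layer trained with no backward pass through other layers:
    the loss is defined directly on this layer's own activations."""

    def __init__(self, d_in, d_out, theta=2.0, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        # Small positive init so every unit starts active (illustrative choice).
        self.W = rng.uniform(0.05, 0.15, size=(d_out, d_in))
        self.theta = theta  # goodness threshold
        self.lr = lr

    def forward(self, x):
        return relu(self.W @ x)

    def goodness(self, x):
        h = self.forward(x)
        return float(np.sum(h * h))  # sum of squared activities

    def update(self, x, positive):
        # Positive samples should push goodness above theta,
        # negative samples should push it below.
        h = self.forward(x)
        g = np.sum(h * h)
        sign = 1.0 if positive else -1.0
        # Gradient of -log sigmoid(sign * (g - theta)) w.r.t. g ...
        dL_dg = -sign * (1.0 - sigmoid(sign * (g - self.theta)))
        # ... chained through g = sum(h^2) and the ReLU gate.
        dL_dh = dL_dg * 2.0 * h
        dL_dh[h <= 0] = 0.0
        self.W -= self.lr * np.outer(dL_dh, x)

layer = FFLayer(d_in=4, d_out=8)
x_pos = np.array([1.0, 0.0, 0.0, 0.0])  # stand-in "positive" input
x_neg = np.array([0.0, 1.0, 0.0, 0.0])  # stand-in "negative" input

g_pos0, g_neg0 = layer.goodness(x_pos), layer.goodness(x_neg)
for _ in range(50):
    layer.update(x_pos, positive=True)
    layer.update(x_neg, positive=False)
# Goodness rises for the positive input and falls for the negative one.
print(layer.goodness(x_pos) > g_pos0, layer.goodness(x_neg) < g_neg0)
```

Because the objective is local to the layer, stacked layers of this kind can be trained independently; this locality is what makes the approach attractive for settings like federated learning mentioned above.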
URL
https://arxiv.org/abs/2404.14664