Paper Reading AI Learner

Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training

2024-05-06 03:06:33
Zexuan Zhong, Mengzhou Xia, Danqi Chen, Mike Lewis

Abstract

Mixture-of-experts (MoE) models facilitate efficient scaling; however, training the router network introduces the challenge of optimizing a non-differentiable, discrete objective. Recently, a fully-differentiable MoE architecture, SMEAR, was proposed (Muqeeth et al., 2023), which softly merges experts in the parameter space; nevertheless, its effectiveness was only demonstrated in downstream fine-tuning on classification tasks. In this paper, we present Lory, the first approach that scales such architectures to autoregressive language model pre-training. Lory introduces two key techniques: (1) a causal segment routing strategy that achieves high efficiency for expert merging operations while preserving the autoregressive nature of language models; (2) a similarity-based data batching method that encourages expert specialization by grouping similar documents in training instances. We pre-train a series of Lory models on 150B tokens from scratch, with up to 32 experts and 30B (1.5B active) parameters. Experimental results show significant performance gains over parameter-matched dense models on both perplexity (+13.9%) and a variety of downstream tasks (+1.5%-11.1%). Despite segment-level routing, Lory models achieve competitive performance compared to state-of-the-art MoE models with token-level routing. We further demonstrate that the trained experts in Lory capture domain-level specialization without supervision. Our work highlights the potential of fully-differentiable MoE architectures for language model pre-training and advocates future research in this area.
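To make the two mechanisms above concrete, here is a minimal PyTorch sketch of a SMEAR-style fully differentiable MoE feed-forward layer with causal segment routing, based only on the abstract's description: the router's softmax weights average the experts' *parameters* (rather than dispatching tokens discretely), and the routing weights for a segment are computed from the previous segment's hidden states so routing stays causal. All class, parameter, and variable names here are ours, not the paper's, and the paper's exact router input and merging details may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftMergedFFN(nn.Module):
    """One MoE feed-forward layer with soft expert merging: the router's
    softmax weights average the experts' parameters, and a single merged
    FFN is applied, so the whole layer stays differentiable (no discrete
    routing decision)."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int):
        super().__init__()
        # Expert weights stacked along a leading expert dimension.
        self.w_in = nn.Parameter(torch.randn(n_experts, d_model, d_ff) * 0.02)
        self.w_out = nn.Parameter(torch.randn(n_experts, d_ff, d_model) * 0.02)
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, segment: torch.Tensor, router_input: torch.Tensor) -> torch.Tensor:
        # segment: (seq_len, d_model) hidden states of the current segment.
        # router_input: hidden states of the *previous* segment; using it
        # keeps routing causal, since the expert merge applied to segment t
        # never depends on segment t's own tokens.
        probs = F.softmax(self.router(router_input.mean(dim=0)), dim=-1)  # (n_experts,)
        # Merge parameters once per segment, then run one cheap FFN pass —
        # this is what makes segment-level (vs. token-level) merging efficient.
        w_in = torch.einsum("e,eio->io", probs, self.w_in)
        w_out = torch.einsum("e,eio->io", probs, self.w_out)
        return F.gelu(segment @ w_in) @ w_out


# Toy usage: a sequence split into two 16-token segments.
ffn = SoftMergedFFN(d_model=64, d_ff=256, n_experts=4)
segments = torch.randn(2, 16, 64)
out = ffn(segments[1], router_input=segments[0])  # segment 2 routed by segment 1
```

Note the design consequence the abstract points at: because parameters are merged once per segment instead of once per token, the expensive merging operation amortizes over the whole segment while gradients still flow through the router.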
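The second technique, similarity-based data batching, can likewise be sketched from the abstract alone: group similar documents into the same training instance so that consecutive segments share a domain, giving the router a learnable specialization signal. The greedy nearest-neighbor grouping below is our assumption, not the paper's stated algorithm, and the function name and `group_size` parameter are hypothetical.

```python
import numpy as np


def group_similar_documents(doc_embeddings: np.ndarray, docs: list[str],
                            group_size: int) -> list[list[str]]:
    """Greedily seed each training instance with an unused document and
    fill it with that document's nearest unused neighbors, measured by
    cosine similarity of the embeddings."""
    # Normalize rows so dot products are cosine similarities.
    emb = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    unused = set(range(len(docs)))
    groups = []
    while len(unused) >= group_size:
        seed = unused.pop()
        ranked = np.argsort(-(emb @ emb[seed]))  # most similar first
        members = [i for i in ranked if i in unused][:group_size - 1]
        unused.difference_update(members)
        groups.append([docs[seed]] + [docs[int(i)] for i in members])
    return groups


# Toy usage with random "embeddings"; in practice these would come from
# an off-the-shelf text encoder run over the pre-training corpus.
rng = np.random.default_rng(0)
docs = [f"doc-{i}" for i in range(10)]
print(group_similar_documents(rng.normal(size=(10, 8)), docs, group_size=3))
```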


URL

https://arxiv.org/abs/2405.03133

PDF

https://arxiv.org/pdf/2405.03133.pdf

