Abstract
We introduce MoMa, a novel modality-aware mixture-of-experts (MoE) architecture designed for pre-training mixed-modal, early-fusion language models. MoMa processes images and text in arbitrary sequences by dividing expert modules into modality-specific groups. These groups exclusively process designated tokens while employing learned routing within each group to maintain semantically informed adaptivity. Our empirical results reveal substantial pre-training efficiency gains through this modality-specific parameter allocation. Under a 1-trillion-token training budget, the MoMa 1.4B model, featuring 4 text experts and 4 image experts, achieves impressive FLOPs savings: 3.7x overall, with 2.6x for text and 5.2x for image processing compared to a compute-equivalent dense baseline, measured by pre-training loss. This outperforms the standard expert-choice MoE with 8 mixed-modal experts, which achieves 3x overall FLOPs savings (3x for text, 2.8x for image). Combining MoMa with mixture-of-depths (MoD) further improves pre-training FLOPs savings to 4.2x overall (text: 3.4x, image: 5.3x), although this combination hurts performance in causal inference due to increased sensitivity to router accuracy. These results demonstrate MoMa's potential to significantly advance the efficiency of mixed-modal, early-fusion language model pre-training, paving the way for more resource-efficient and capable multimodal AI systems.
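To make the modality-aware grouping concrete, the sketch below shows one possible shape of such a layer in PyTorch. It is a minimal illustration, not the authors' implementation: the class and parameter names (ModalityAwareMoELayer, ExpertFFN, capacity_factor, and so on) are assumptions, the router is a plain linear scorer with a simplified expert-choice selection inside each group, and details such as auxiliary losses, batching, and the causal routing used at inference time are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertFFN(nn.Module):
    """One feed-forward expert."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(F.gelu(self.fc1(x)))


class ModalityAwareMoELayer(nn.Module):
    """Splits tokens by modality, then routes each group only to its own
    modality-specific experts (4 text + 4 image here, as in MoMa 1.4B)."""
    def __init__(self, d_model: int = 512, d_hidden: int = 2048,
                 n_text_experts: int = 4, n_image_experts: int = 4,
                 capacity_factor: float = 1.0):
        super().__init__()
        self.text_experts = nn.ModuleList(
            [ExpertFFN(d_model, d_hidden) for _ in range(n_text_experts)])
        self.image_experts = nn.ModuleList(
            [ExpertFFN(d_model, d_hidden) for _ in range(n_image_experts)])
        self.text_router = nn.Linear(d_model, n_text_experts)
        self.image_router = nn.Linear(d_model, n_image_experts)
        self.capacity_factor = capacity_factor

    def _route(self, x: torch.Tensor, router: nn.Linear,
               experts: nn.ModuleList) -> torch.Tensor:
        # Simplified expert-choice routing inside one modality group:
        # each expert selects its top-c tokens by router score, so the
        # load per expert is balanced by construction.
        n_tokens = x.shape[0]
        c = max(1, int(self.capacity_factor * n_tokens / len(experts)))
        scores = F.softmax(router(x), dim=-1)          # (n_tokens, n_experts)
        out = torch.zeros_like(x)
        for e, expert in enumerate(experts):
            gate, idx = scores[:, e].topk(min(c, n_tokens))
            out[idx] += gate.unsqueeze(-1) * expert(x[idx])
        return out

    def forward(self, hidden: torch.Tensor, is_image: torch.Tensor) -> torch.Tensor:
        # hidden: (n_tokens, d_model); is_image: (n_tokens,) boolean modality mask.
        out = torch.zeros_like(hidden)
        text_mask = ~is_image
        if text_mask.any():
            out[text_mask] = self._route(hidden[text_mask],
                                         self.text_router, self.text_experts)
        if is_image.any():
            out[is_image] = self._route(hidden[is_image],
                                        self.image_router, self.image_experts)
        return out


# Usage: a mixed-modal sequence of 16 tokens whose first 6 tokens are image tokens.
layer = ModalityAwareMoELayer()
tokens = torch.randn(16, 512)
is_image = torch.tensor([True] * 6 + [False] * 10)
print(layer(tokens, is_image).shape)  # torch.Size([16, 512])
```

The point the sketch illustrates is the division of labor described in the abstract: text tokens never touch the image experts and vice versa, while routing within each modality group remains learned rather than fixed.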
URL
https://arxiv.org/abs/2407.21770