Abstract
Large Language Models (LLMs) have demonstrated significant potential in performing multiple tasks in multimedia applications, ranging from content generation to interactive entertainment and artistic creation. However, the diversity of downstream tasks in multitask scenarios presents substantial adaptation challenges for LLMs. While traditional methods often succumb to knowledge confusion in their monolithic dense models, Mixture-of-Experts (MoE) has emerged as a promising solution, with a sparse architecture that enables effective task decoupling. Inspired by principles of human cognitive neuroscience, we design a novel framework, \texttt{Intuition-MoR1E}, that leverages the inherent semantic clustering of instances to mimic how the human brain handles multiple tasks, offering implicit guidance to the router for optimized feature allocation. Moreover, we introduce a Rank-1 Experts formulation designed to manage a spectrum of intuitions, demonstrating enhanced parameter efficiency and effectiveness in multitask LLM finetuning. Extensive experiments demonstrate that Intuition-MoR1E achieves superior efficiency and a 2.15\% overall accuracy improvement across 14 public datasets compared with other state-of-the-art baselines.
Abstract (translated)
Large Language Models (LLMs) have shown significant potential for performing multiple tasks in multimedia applications, from content generation to interactive entertainment and artistic creation. However, the diversity of downstream tasks in multitask scenarios poses substantial adaptation challenges for LLMs. While traditional methods often fall into knowledge confusion on monolithic dense models, Mixture-of-Experts (MoE) has proven to be a promising solution, as its sparse architecture facilitates task decoupling and improves model performance. Inspired by principles of human cognitive neuroscience, we design a novel framework named Intuition-MoR1E, which leverages the inherent semantic clustering of instances to mimic how the human brain handles multiple tasks, providing the router with implicit guidance for optimal feature allocation. In addition, we introduce an advanced Rank-1 Experts formulation for managing intuitions, demonstrating improved parameter efficiency and effectiveness in multitask LLM finetuning. Extensive experiments show that Intuition-MoR1E improves overall accuracy by 2.15% over other state-of-the-art baselines across 14 public datasets, while also achieving higher efficiency.
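To make the Rank-1 Experts idea from the abstract concrete, below is a minimal, illustrative sketch of a mixture of rank-1 (LoRA-style) experts with a learned router: each expert contributes only two vectors (an outer product u_i v_i^T), so many experts can be added at low parameter cost. This is an assumption-laden reading of the abstract, not the paper's actual implementation; all names (Rank1MoELayer, num_experts, top_k) are hypothetical, and the intuition-based guidance to the router is not modeled here.

```python
# Hypothetical sketch of a mixture of rank-1 experts; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Rank1MoELayer(nn.Module):
    def __init__(self, d_in: int, d_out: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)           # frozen pretrained projection
        for p in self.base.parameters():
            p.requires_grad_(False)
        # Each expert i is a rank-1 update u_i v_i^T: just two vectors per expert.
        self.u = nn.Parameter(torch.zeros(num_experts, d_out))
        self.v = nn.Parameter(torch.randn(num_experts, d_in) * 0.01)
        self.router = nn.Linear(d_in, num_experts)   # token-wise gating network
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, d_in)
        gate = F.softmax(self.router(x), dim=-1)          # (batch, num_experts)
        if self.top_k < gate.size(-1):                    # keep only top-k experts (sparse routing)
            topv, topi = gate.topk(self.top_k, dim=-1)
            gate = torch.zeros_like(gate).scatter_(-1, topi, topv)
        vx = x @ self.v.t()                               # (batch, num_experts): v_i^T x
        delta = (gate * vx) @ self.u                      # gated sum of rank-1 updates
        return self.base(x) + delta
```

Under this sketch, a frozen base projection is augmented by a router-weighted sum of rank-1 updates, which is one plausible way a "Rank-1 Experts" layer could trade expert capacity for parameter efficiency.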
URL
https://arxiv.org/abs/2404.08985