
DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models

Damai Dai [email protected] (National Key Laboratory for Multimedia Information Processing, Peking University), Chengqi Deng, Chenggang Zhao (Institute for Interdisciplinary Information Sciences, Tsinghua University), R. X. Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu (National Key Laboratory for Novel Software Technology, Nanjing University), Y. Wu, Zhenda Xie, Y. K. Li, Panpan Huang, Fuli Luo, Chong Ruan, Zhifang Sui (National Key Laboratory for Multimedia Information Processing, Peking University), Wenfeng Liang [email protected] (2024)

Paper Information
arXiv ID: 2401.06066
Venue: Annual Meeting of the Association for Computational Linguistics
Domain: Artificial Intelligence, Deep Learning, NLP
SOTA Claim: Yes

Abstract

In the era of large language models, Mixture-of-Experts (MoE) is a promising architecture for managing computational costs when scaling up model parameters. However, conventional MoE architectures like GShard, which activate the top-K out of N experts, face challenges in ensuring expert specialization, i.e., each expert acquiring non-overlapping and focused knowledge. In response, we propose the DeepSeekMoE architecture towards ultimate expert specialization. It involves two principal strategies: (1) finely segmenting the experts into mN ones and activating mK from them, allowing for a more flexible combination of activated experts; (2) isolating K_s experts as shared ones, aiming at capturing common knowledge and mitigating redundancy in routed experts. Starting from a modest scale with 2B parameters, we demonstrate that DeepSeekMoE 2B achieves comparable performance with GShard 2.9B, which has 1.5× expert parameters and computation. In addition, DeepSeekMoE 2B nearly approaches the performance of its dense counterpart with the same number of total parameters, which sets the upper bound of MoE models. Subsequently, we scale up DeepSeekMoE to 16B parameters and show that it achieves comparable performance with LLaMA2 7B, with only about 40% of computations. Further, our preliminary efforts to scale up DeepSeekMoE to 145B parameters consistently validate its substantial advantages over the GShard architecture, and show its performance comparable with DeepSeek 67B, using only 28.5% (maybe even 18.2%) of computations.
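The routing scheme described in the abstract (always-active shared experts plus many fine-grained routed experts selected per token by a top-K softmax gate) can be sketched in a few lines of PyTorch. The sketch below is a minimal illustration under assumed hyperparameters (d_model=512, 2 shared experts, 64 routed experts, top-6 routing, reduced per-expert hidden size), not the authors' released implementation.

```python
# Minimal sketch of a DeepSeekMoE-style layer: shared experts are always applied,
# while fine-grained routed experts are selected per token by a top-K softmax gate.
# All sizes and expert counts here are illustrative assumptions, not the paper's config.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """A small FFN; fine-grained experts use a reduced hidden size."""
    def __init__(self, d_model: int, d_expert: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_expert)
        self.down = nn.Linear(d_expert, d_model)

    def forward(self, x):
        return self.down(F.relu(self.up(x)))


class DeepSeekMoELayer(nn.Module):
    def __init__(self, d_model=512, d_expert=128, n_shared=2, n_routed=64, top_k=6):
        super().__init__()
        self.shared = nn.ModuleList([Expert(d_model, d_expert) for _ in range(n_shared)])
        self.routed = nn.ModuleList([Expert(d_model, d_expert) for _ in range(n_routed)])
        self.router = nn.Linear(d_model, n_routed, bias=False)
        self.top_k = top_k

    def forward(self, x):                          # x: (batch, seq, d_model)
        tokens = x.reshape(-1, x.size(-1))         # flatten to (num_tokens, d_model)
        scores = self.router(tokens).softmax(dim=-1)
        top_w, top_idx = scores.topk(self.top_k, dim=-1)

        out = sum(e(tokens) for e in self.shared)  # shared experts: no gating
        for k in range(self.top_k):                # routed experts: gated by top-K weights
            for e_id in top_idx[:, k].unique():
                mask = top_idx[:, k] == e_id
                out[mask] += top_w[mask, k].unsqueeze(-1) * self.routed[int(e_id)](tokens[mask])
        return (out + tokens).reshape_as(x)        # add residual, restore (batch, seq, d_model)


if __name__ == "__main__":
    layer = DeepSeekMoELayer()
    y = layer(torch.randn(2, 8, 512))
    print(y.shape)  # torch.Size([2, 8, 512])
```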

Summary

The paper presents DeepSeekMoE, a Mixture-of-Experts (MoE) architecture designed to enhance expert specialization in large language models. It introduces two key innovations: fine-grained expert segmentation, which allows knowledge to be distributed more flexibly across experts, and shared expert isolation, which reduces redundancy among the routed experts. At a modest scale, DeepSeekMoE 2B matches the performance of GShard 2.9B while using roughly two-thirds of its expert parameters and computation, and nearly reaches the performance of a dense model with the same total parameters. When scaled to 16B parameters, DeepSeekMoE achieves performance comparable to LLaMA2 7B with only about 40% of the computation. Preliminary experiments at 145B parameters further confirm substantial advantages over the GShard architecture, with performance comparable to DeepSeek 67B at a fraction of the computation.

Methods

This paper employs the following methods:

  • Mixture-of-Experts (MoE)
  • Fine-Grained Expert Segmentation (see the worked example after this list)
  • Shared Expert Isolation
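
Fine-grained segmentation splits each expert into m smaller experts and activates m times as many per token, so the number of possible expert combinations grows combinatorially while the activated parameter count stays fixed. The toy calculation below (choosing 2 of 16 experts versus, after a 4-way split, 8 of 64) is purely illustrative:

```python
# Toy illustration: fine-grained segmentation enlarges the space of expert combinations.
# Splitting each of 16 experts into 4 smaller ones and activating 4x as many per token
# keeps the activated parameter count fixed but multiplies the routing possibilities.
from math import comb

coarse = comb(16, 2)  # conventional top-2 routing over 16 experts
fine = comb(64, 8)    # fine-grained top-8 routing over 64 smaller experts

print(f"coarse combinations: {coarse:,}")  # 120
print(f"fine combinations:   {fine:,}")    # 4,426,165,368
```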

Models Used

  • DeepSeekMoE
  • GShard
  • LLaMA2 7B
  • DeepSeek 67B

Datasets

The following datasets were used in this research:

  • Pile
  • HellaSwag
  • PIQA
  • ARC-Challenge
  • ARC-Easy
  • RACE-high
  • RACE-middle
  • HumanEval
  • MBPP
  • TriviaQA
  • NaturalQuestions
  • DROP
  • GSM8K
  • MATH
  • MMLU
  • CLUEWSC
  • C-Eval
  • CMMLU
  • CHID

Evaluation Metrics

  • Cross-Entropy Loss
  • Accuracy
  • Pass@1
  • Exact Match (EM)
  • Bits Per Byte (BPB) (see the conversion sketch after this list)
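
Bits Per Byte (BPB) normalizes language-modeling loss by the UTF-8 byte length of the evaluated text, which makes scores comparable across models with different tokenizers. The conversion below is a generic sketch of the standard definition, not the paper's evaluation code:

```python
# Generic conversion from summed cross-entropy loss (in nats) to bits per byte (BPB).
import math

def bits_per_byte(total_nll_nats: float, total_utf8_bytes: int) -> float:
    """total_nll_nats: negative log-likelihood summed over all evaluated tokens."""
    return total_nll_nats / (total_utf8_bytes * math.log(2))

# Example: 1.2M nats of loss over 1M bytes of text -> about 1.73 BPB
print(round(bits_per_byte(1_200_000, 1_000_000), 2))
```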

Results

  • DeepSeekMoE 2B outperforms GShard 2B
  • DeepSeekMoE 16B achieves performance comparable to LLaMA2 7B with only 40% of computations
  • DeepSeekMoE 145B shows consistent advantages over the GShard architecture and performs comparably to DeepSeek 67B using only 28.5% of computations

Limitations

The authors identified the following limitations:

  • Not specified

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: NVIDIA A100 or H800

Keywords

DeepSeekMoE, Expert Specialization, Mixture-of-Experts, Transformer, Language Models, Scaling Efficiency

Papers Using Similar Methods

External Resources