A Mixture of Experts (MoE) is a neural network architecture that increases model capacity by combining many specialized sub-networks (“experts”) and using a gating (routing) mechanism to activate only a small subset of experts per input token, reducing compute compared to activating the full model.
What is Mixture of Experts (MoE)?
In a standard dense Transformer, every token passes through the same feed-forward layers, so compute grows roughly linearly with parameter count. MoE replaces some (or all) dense feed-forward blocks with an expert layer: multiple expert MLPs exist in parallel, and a learned router assigns each token to its top-k experts (often k=1 or k=2). Only the selected experts run, and their outputs are combined, typically as a weighted sum using the router's probabilities. This yields a model with a very large total parameter count (high capacity) while keeping per-token FLOPs close to those of a smaller dense model (similar "active parameters"). MoE systems also add balancing losses or routing constraints so that traffic spreads across experts and no single expert becomes a bottleneck.
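As a concrete illustration, here is a minimal sketch of a sparse top-k MoE feed-forward layer in PyTorch. It is not the implementation used by any particular model; the class name `NaiveMoELayer` and all hyperparameters are hypothetical, and the per-expert loop is written for clarity rather than efficiency.

```python
# Minimal sketch of a sparse top-k MoE feed-forward layer (illustrative,
# assumes PyTorch; class and variable names are hypothetical).
import torch
import torch.nn as nn

class NaiveMoELayer(nn.Module):
    def __init__(self, d_model, d_hidden, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, num_experts)          # learned router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                        # x: (batch, seq, d_model)
        tokens = x.reshape(-1, x.shape[-1])      # flatten to (num_tokens, d_model)
        logits = self.router(tokens)             # (num_tokens, num_experts)
        probs = logits.softmax(dim=-1)
        topk_p, topk_idx = probs.topk(self.k, dim=-1)        # top-k experts per token
        topk_p = topk_p / topk_p.sum(dim=-1, keepdim=True)   # renormalize gate weights
        out = torch.zeros_like(tokens)
        # Dispatch: for each expert, gather its assigned tokens, run the expert
        # MLP, and scatter the gate-weighted outputs back.
        for e, expert in enumerate(self.experts):
            mask = (topk_idx == e)                            # (num_tokens, k)
            token_ids, slot = mask.nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue
            expert_out = expert(tokens[token_ids])
            out.index_add_(0, token_ids,
                           expert_out * topk_p[token_ids, slot].unsqueeze(-1))
        return out.reshape_as(x)

# Example: 4 sequences of 16 tokens, model width 64
layer = NaiveMoELayer(d_model=64, d_hidden=256, num_experts=8, k=2)
y = layer(torch.randn(4, 16, 64))
```

Only the two selected experts run for each token, which is what keeps the active parameter count (and per-token FLOPs) well below the total parameter count.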
Where MoE is used and why it matters
MoE is used in large-scale language models to improve quality at a given inference cost, or to reduce cost at a given quality. It is particularly attractive when training very large models where dense scaling becomes prohibitively expensive. MoE also enables specialization: experts can implicitly focus on different domains, languages, or styles. Operationally, MoE introduces new serving and training considerations—routing stability, expert parallelism across GPUs, and avoiding “hot” experts that overload specific devices.
Types
- Sparse MoE: only top-k experts activated per token (the common approach).
- Token-level vs. sequence-level routing: route each token independently, or route an entire sequence to the same experts (see the sketch after this list).
- Expert parallelism: distribute experts across devices to scale capacity.
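To make the routing-granularity distinction concrete, this short sketch (illustrative, assuming PyTorch; the router here is random, not trained) contrasts a per-token expert choice with a per-sequence choice made after pooling the sequence.

```python
# Sketch contrasting token-level and sequence-level routing decisions
# (illustrative only; the router weights are untrained).
import torch

num_experts, d_model = 4, 8
router = torch.nn.Linear(d_model, num_experts)
x = torch.randn(2, 5, d_model)                     # (batch, seq, d_model)

# Token-level: every token gets its own expert assignment.
token_expert = router(x).argmax(dim=-1)            # shape (2, 5): one expert id per token

# Sequence-level: pool the sequence first, then route the whole sequence.
seq_expert = router(x.mean(dim=1)).argmax(dim=-1)  # shape (2,): one expert id per sequence
```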
FAQs
Do MoE models run faster than dense models?
Often, yes, at comparable quality: only a few experts are active per token, so per-token compute is lower than for a dense model with the same total parameter count. However, routing overhead and cross-device communication can reduce real-world gains.
What is “expert load balancing”?
It’s a training objective or constraint that encourages the router to distribute tokens across experts so compute and memory usage remain stable.
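A common formulation, in the style of the Switch Transformer auxiliary loss, multiplies the fraction of tokens each expert actually receives by the mean router probability it is assigned; the product is minimized when both are uniform. The sketch below is illustrative, with hypothetical function and variable names.

```python
# Sketch of a common auxiliary load-balancing loss (Switch Transformer style);
# function and variable names are illustrative.
import torch

def load_balancing_loss(router_logits, expert_index, num_experts):
    """router_logits: (num_tokens, num_experts); expert_index: (num_tokens,) top-1 choices."""
    probs = router_logits.softmax(dim=-1)
    # f_e: fraction of tokens actually dispatched to each expert
    dispatch_frac = torch.bincount(expert_index, minlength=num_experts).float()
    dispatch_frac = dispatch_frac / expert_index.numel()
    # P_e: mean router probability assigned to each expert
    mean_prob = probs.mean(dim=0)
    # Minimized when both distributions are uniform (1/num_experts each)
    return num_experts * torch.sum(dispatch_frac * mean_prob)

# Example: 100 tokens, 8 experts, top-1 routing
logits = torch.randn(100, 8)
loss = load_balancing_loss(logits, logits.argmax(dim=-1), num_experts=8)
```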
Are MoE models harder to deploy?
Yes. They require careful parallelization and monitoring to prevent bottlenecks from uneven routing or hardware placement.