Scattered Mixture-of-Experts Implementation
CoRR (2024)
Abstract
We present ScatterMoE, an implementation of Sparse Mixture-of-Experts (SMoE)
on GPUs. ScatterMoE builds upon existing implementations and overcomes some
of their limitations to improve inference and training speed and reduce the
memory footprint. It achieves this by avoiding padding and excessive copying
of the input.
We introduce ParallelLinear, the main component we use to build our
implementation, and the various kernels used to speed up the operation. We
benchmark our implementation against Megablocks and show that it enables
higher throughput and a lower memory footprint. We also show how
ParallelLinear enables extensions of the Mixture-of-Experts concept,
demonstrated with an implementation of Mixture of Attention.
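To make the padding-free grouped computation described above concrete, the following is a minimal plain-PyTorch sketch of the idea, not the library's actual ParallelLinear or its fused Triton kernels: tokens are sorted by their assigned expert so each expert's weight applies to a contiguous slice, avoiding padded expert buffers and per-expert copies of the full input. The function name `grouped_linear` and the top-1 routing assumption are illustrative choices, not part of the ScatterMoE API.

```python
import torch

def grouped_linear(x, expert_idx, weights):
    """Hypothetical sketch of a padding-free grouped linear (top-1 routing assumed).

    x:          (tokens, d_in)  input token representations
    expert_idx: (tokens,)       expert assignment per token
    weights:    (n_experts, d_in, d_out) per-expert weight matrices
    """
    # Group tokens by expert so each expert sees one contiguous slice.
    order = torch.argsort(expert_idx)
    x_sorted = x[order]
    counts = torch.bincount(expert_idx, minlength=weights.shape[0])

    out_sorted = torch.empty(x.shape[0], weights.shape[-1],
                             dtype=x.dtype, device=x.device)
    start = 0
    for e, n in enumerate(counts.tolist()):
        if n == 0:
            continue
        # Apply expert e's weights to its slice; no padding to a fixed capacity.
        out_sorted[start:start + n] = x_sorted[start:start + n] @ weights[e]
        start += n

    # Scatter results back to the original token order.
    out = torch.empty_like(out_sorted)
    out[order] = out_sorted
    return out
```

In the actual implementation, the gather, per-expert matrix multiply, and scatter are fused on the GPU rather than performed as separate Python-level steps; the sketch only conveys why sorting by expert removes the need for padded or duplicated inputs.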