Accelerating Federated Learning Via Sequential Training of Grouped Heterogeneous Clients
IEEE Access (2024)
Abstract
Federated Learning (FL) allows training machine learning models in privacy-constrained scenarios by enabling the cooperation of edge devices without requiring local data sharing. This approach raises several challenges due to the differing statistical distributions of the local datasets and the clients' computational heterogeneity. In particular, the presence of highly non-i.i.d. data severely impairs both the performance of the trained neural network and its convergence rate, increasing the number of communication rounds required to reach centralized performance. As a solution, we propose FedSeq, a novel framework leveraging the sequential training of subgroups of heterogeneous clients, i.e., superclients, to learn more robust models before the server-side averaging step. Given a fixed budget of communication rounds, we show that FedSeq outperforms or matches several state-of-the-art federated algorithms in terms of final performance and speed of convergence. Our method can be easily integrated with other approaches available in the literature, and empirical results show that combining existing algorithms with FedSeq further improves its final performance and convergence speed. We evaluate our method across multiple FL benchmarks, establishing its effectiveness in both i.i.d. and non-i.i.d. scenarios. Lastly, we highlight that the sequential training introduced here raises no additional privacy concerns compared to the de facto standard, FedAvg.
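To make the idea concrete, below is a minimal sketch of the scheme the abstract describes: within each superclient, the model is passed from client to client and trained sequentially, and the server then averages the superclients' models as in FedAvg. All names here (train_superclient, local_epochs, the toy data, and the unweighted average) are illustrative assumptions, not the authors' implementation or API.

```python
# Sketch of sequential training within superclients + FedAvg-style averaging.
# Hypothetical helper names; not the paper's code.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_superclient(model, client_loaders, lr=0.01, local_epochs=1):
    """Sequentially train one model on each client's data inside a superclient."""
    model = copy.deepcopy(model)
    loss_fn = nn.CrossEntropyLoss()
    for loader in client_loaders:  # client k hands its trained model to client k+1
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
    return model.state_dict()

def average_states(states):
    """Server-side (unweighted, for simplicity) parameter averaging."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg

# Toy usage: 2 superclients, each grouping 3 clients with random local data.
global_model = nn.Linear(10, 2)
superclients = [
    [DataLoader(TensorDataset(torch.randn(32, 10), torch.randint(0, 2, (32,))),
                batch_size=8)
     for _ in range(3)]
    for _ in range(2)
]
for round_idx in range(5):  # fixed budget of communication rounds
    states = [train_superclient(global_model, sc) for sc in superclients]
    global_model.load_state_dict(average_states(states))
```

The intuition, per the abstract, is that grouping clients with heterogeneous data into a superclient lets the sequentially trained model see a more representative data mixture before averaging, which would mitigate non-i.i.d. drift.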
Keywords
Federated learning, distributed learning, privacy-preserving machine learning, statistical heterogeneity, deep learning