Abstract
Split Federated Learning (SFL) is a distributed machine learning framework that strategically divides the learning process between a server and clients, collaboratively training a shared model by aggregating local models updated on the distributed clients' data. However, data heterogeneity and partial client participation result in label distribution skew, which severely degrades learning performance. To address this issue, we propose SFL with Concatenated Activations and Logit Adjustments (SCALA). Specifically, the activations from the client-side models are concatenated as the input to the server-side model, so that the label distribution across different clients can be adjusted centrally, and logit adjustments are applied to the loss functions of both the server-side and client-side models to handle the label distribution variation across different subsets of participating clients. Theoretical analysis and experimental results verify the superiority of the proposed SCALA on public datasets.
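To make the two mechanisms concrete, the following is a minimal PyTorch sketch, not the paper's reference implementation: the function names (logit_adjusted_loss, server_step) and the per-class count statistic label_counts are illustrative assumptions about how the pooled label prior of the participating clients might be tracked.

    import torch
    import torch.nn.functional as F

    def logit_adjusted_loss(logits, labels, label_counts, tau=1.0):
        # Logit adjustment: shift each logit by the log of the empirical
        # label prior so that skewed (minority) classes are not drowned
        # out by majority classes during training.
        prior = label_counts.float() / label_counts.sum()
        adjusted = logits + tau * torch.log(prior + 1e-12)
        return F.cross_entropy(adjusted, labels)

    def server_step(client_activations, client_labels, server_model, label_counts):
        # Concatenate the activations (and labels) uploaded by all
        # participating clients along the batch dimension, so the
        # server-side model trains on the pooled label distribution
        # of the current round rather than any single client's skew.
        h = torch.cat(client_activations, dim=0)
        y = torch.cat(client_labels, dim=0)
        logits = server_model(h)
        return logit_adjusted_loss(logits, y, label_counts)

In this sketch, label_counts would be recomputed each round from the labels of the currently participating clients, which is what lets the adjustment track the round-to-round variation in the label distribution.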
URL
https://arxiv.org/abs/2405.04875