Abstract
To address the heterogeneity arising from label distribution skew and data scarcity in distributed machine learning, this paper proposes a novel Personalized Federated Learning (PFL) algorithm named Federated Contrastive Representation Learning (FedCRL). FedCRL introduces contrastive representation learning (CRL) on shared representations to facilitate knowledge acquisition across clients. Specifically, both local model parameters and the averaged values of local representations are shared with the server, where both are aggregated globally. CRL is then applied between local and global representations to regularize personalized training, drawing similar representations closer and separating dissimilar ones; this enriches local models with external knowledge while shielding them from label distribution skew. Additionally, FedCRL performs local aggregation between each local model and the global model to tackle data scarcity. A loss-wise weighting mechanism uses each client's contrastive loss to coordinate the global model's involvement in local aggregation, thereby assisting clients with scarce data. Our simulations demonstrate FedCRL's effectiveness in mitigating label heterogeneity: it achieves accuracy improvements over existing methods on datasets with varying degrees of label heterogeneity.
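The two mechanisms the abstract describes, a contrastive term between local and global representations and a loss-wise weighted interpolation with the global model, can be sketched as below. This is a minimal illustration based only on the abstract, not the paper's actual implementation; the function names, the InfoNCE-style form of the loss, and the sum-normalized weighting are all assumptions.

```python
import math

def contrastive_loss(local_rep, global_reps, label, temperature=0.5):
    """InfoNCE-style CRL term (assumed form): pull the local representation
    toward the global representation of its own label, push it away from
    the global representations of other labels."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    # Exponentiated, temperature-scaled similarities to every global prototype.
    sims = {k: math.exp(cos(local_rep, v) / temperature)
            for k, v in global_reps.items()}
    return -math.log(sims[label] / sum(sims.values()))

def losswise_weights(contrastive_losses):
    """Loss-wise weighting (assumed normalization): clients with higher
    contrastive loss rely more heavily on the global model."""
    total = sum(contrastive_losses)
    return [l / total for l in contrastive_losses]

def local_aggregation(local_params, global_params, weight):
    """Interpolate each local parameter with the global model by `weight`."""
    return [(1 - weight) * lp + weight * gp
            for lp, gp in zip(local_params, global_params)]
```

In this sketch, a client whose representations already align with the global prototypes incurs a small contrastive loss and therefore a small aggregation weight, keeping its personalized model mostly intact, while a data-scarce client with a large loss pulls in more of the global model.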
URL
https://arxiv.org/abs/2404.17916