Abstract
Differentially private stochastic gradient descent privatizes model training by injecting noise into each iteration, where the noise magnitude increases with the number of model parameters. Recent works suggest that this noise can be reduced by leveraging public data: gradients are projected onto a low-dimensional subspace prescribed by the public data. However, given a choice of public datasets, it is not a priori clear which one is most appropriate for the private task. We give an algorithm for selecting a public dataset by measuring a low-dimensional subspace distance between gradients of the public and private examples. We provide theoretical analysis demonstrating that the excess risk scales with this subspace distance. This distance is easy to compute and robust to modifications in the setting. Empirical evaluation shows that trained model accuracy is monotone in this distance.
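The abstract does not specify the exact distance used; a natural candidate is the projection distance between the top-k principal subspaces of the public and private per-example gradients, computed from principal angles via SVD. The sketch below is illustrative only (function name, the choice of k, and the projection metric are assumptions, not the paper's method):

```python
import numpy as np

def subspace_distance(grads_public, grads_private, k=16):
    """Projection-metric distance between top-k gradient subspaces.

    grads_*: (n_examples, n_params) matrices of per-example gradients.
    NOTE: illustrative sketch; the paper's actual distance may differ.
    """
    # Top-k right singular vectors span each gradient subspace.
    _, _, Vt_pub = np.linalg.svd(grads_public, full_matrices=False)
    _, _, Vt_priv = np.linalg.svd(grads_private, full_matrices=False)
    V_pub, V_priv = Vt_pub[:k].T, Vt_priv[:k].T  # (n_params, k), orthonormal columns

    # Principal angles: singular values of V_pub^T V_priv are cos(theta_i).
    cosines = np.linalg.svd(V_pub.T @ V_priv, compute_uv=False)
    cosines = np.clip(cosines, 0.0, 1.0)
    # Distance in [0, sqrt(k)]: 0 for identical subspaces, sqrt(k) for orthogonal.
    return float(np.sqrt(k - np.sum(cosines**2)))
```

Under this metric, a public dataset whose gradients span a subspace closer to the private gradients' subspace would be preferred for the projection step.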
URL
https://arxiv.org/abs/2303.01256