Abstract
Recent gaze estimation methods require large-scale training data, but its collection and exchange pose significant privacy risks. We propose PrivatEyes, the first privacy-enhancing training approach for appearance-based gaze estimation based on federated learning (FL) and secure multi-party computation (MPC). PrivatEyes enables training gaze estimators on multiple local datasets across different users, with server-based secure aggregation of the individual estimators' updates. It guarantees that individual gaze data remains private even if a majority of the aggregating servers is malicious. We also introduce a new data-leakage attack, DualView, which shows that PrivatEyes limits the leakage of private training data more effectively than previous approaches. Evaluations on the MPIIGaze, MPIIFaceGaze, GazeCapture, and NVGaze datasets further show that the improved privacy does not come at the cost of lower gaze estimation accuracy or substantially higher computational costs; both are on par with its non-secure counterparts.
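To illustrate the core idea behind MPC-based secure aggregation, the sketch below shows additive secret sharing over a finite field: each client splits its (integer-encoded) model update into random shares, one per server, so that no single server learns anything about the update, yet the sum of all servers' partial sums equals the aggregate of all clients' updates. This is a minimal, generic sketch of the technique, not the actual PrivatEyes protocol; the function names and the choice of modulus are illustrative assumptions.

```python
import secrets

PRIME = 2**61 - 1  # illustrative field modulus for additive secret sharing


def share(value: int, n_servers: int) -> list[int]:
    # Split an integer into n_servers additive shares mod PRIME.
    # Any n_servers - 1 shares are uniformly random and reveal nothing.
    shares = [secrets.randbelow(PRIME) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def secure_aggregate(client_updates: list[int], n_servers: int = 3) -> int:
    # Each client secret-shares its update across the servers; each
    # server only ever sees one share per client and sums the shares it
    # receives. Recombining the servers' partial sums yields the total
    # of all updates without exposing any individual update.
    server_sums = [0] * n_servers
    for update in client_updates:
        for i, s in enumerate(share(update, n_servers)):
            server_sums[i] = (server_sums[i] + s) % PRIME
    return sum(server_sums) % PRIME


print(secure_aggregate([3, 5, 7]))  # 15
```

In a real FL deployment the shared values would be fixed-point encodings of model-weight deltas, and malicious-majority security (as claimed by PrivatEyes) requires additional machinery beyond this honest-but-curious sketch.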
URL
https://arxiv.org/abs/2402.18970