Abstract
Large Language Models (LLMs) are prone to memorizing training data, which poses serious privacy risks. Two of the most prominent concerns are training data extraction and Membership Inference Attacks (MIAs). Prior research has shown that these threats are interconnected: an adversary can extract training data from an LLM by querying the model to generate a large volume of text and then applying MIAs to verify whether a particular generated sample was part of the training set. In this study, we integrate multiple MIA techniques into the data extraction pipeline to systematically benchmark their effectiveness. We then compare their performance in this integrated setting against results from conventional MIA benchmarks, allowing us to evaluate their practical utility in real-world extraction scenarios.
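The extraction-plus-verification pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: the unigram token probabilities stand in for a real LLM's predicted probabilities, the threshold is arbitrary, and a loss-based (average negative log-likelihood) score is just one of the MIA techniques such a pipeline could plug in.

```python
import math

# Hypothetical stand-in for an LLM's token probabilities; a real attack
# would query the target model for per-token likelihoods instead.
TOKEN_PROBS = {"the": 0.05, "cat": 0.01, "sat": 0.008, "secret": 0.0001}
DEFAULT_PROB = 0.0005  # probability assigned to unseen tokens

def avg_nll(tokens):
    """Average negative log-likelihood of a token sequence under the model."""
    return sum(-math.log(TOKEN_PROBS.get(t, DEFAULT_PROB)) for t in tokens) / len(tokens)

def loss_mia(tokens, threshold=6.0):
    """Loss-based MIA: an unusually low average NLL (i.e., the model finds
    the sample very likely) is taken as evidence of training-set membership."""
    return avg_nll(tokens) < threshold

def extraction_pipeline(generated_samples):
    """Score each generated candidate and keep those flagged as members."""
    return [s for s in generated_samples if loss_mia(s)]
```

Here the pipeline generates many candidates (omitted above), scores each with the MIA, and keeps only those the attack flags as likely training members; the paper's contribution is benchmarking how well different MIA scoring functions perform in this verification role.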
URL
https://arxiv.org/abs/2512.13352