Abstract
We study privacy leakage in the reasoning traces of large reasoning models used as personal agents. Unlike final outputs, reasoning traces are often assumed to be internal and therefore safe. We challenge this assumption by showing that reasoning traces frequently contain sensitive user data, which can be extracted through prompt injection or can accidentally leak into final outputs. Through probing and agentic evaluations, we demonstrate that test-time compute approaches, particularly increased reasoning steps, amplify such leakage. While a larger test-time compute budget makes models more cautious in their final answers, it also leads them to reason more verbosely and to leak more within their own thinking. This reveals a core tension: reasoning improves utility but enlarges the privacy attack surface. We argue that safety efforts must extend to the model's internal thinking, not just its outputs.
URL
https://arxiv.org/abs/2506.15674