Abstract
Writing effective rebuttals is a high-stakes task that demands more than linguistic fluency: it requires precise alignment between reviewer intent and manuscript details. Current solutions typically treat this as a direct-to-text generation problem and consequently suffer from hallucination, overlooked critiques, and a lack of verifiable grounding. To address these limitations, we introduce $\textbf{RebuttalAgent}$, the first multi-agent framework that reframes rebuttal generation as an evidence-centric planning task. Our system decomposes complex feedback into atomic concerns and dynamically constructs hybrid contexts by synthesizing compressed summaries with high-fidelity text, while integrating an autonomous, on-demand external search module to resolve concerns that require outside literature. By generating an inspectable response plan before drafting, $\textbf{RebuttalAgent}$ ensures that every argument is explicitly anchored in internal or external evidence. We validate our approach on the proposed $\textbf{RebuttalBench}$ and demonstrate that our pipeline outperforms strong baselines in coverage, faithfulness, and strategic coherence, offering a transparent and controllable assistant for the peer review process. Code will be released.
URL
https://arxiv.org/abs/2601.14171