Abstract
Many real-world applications require surfacing extracted snippets to users, whether motivated by assistive tools for literature surveys or document cross-referencing, or by the need to mitigate and recover from model-generated inaccuracies. Yet, these passages can be difficult to consume when divorced from their original document context. In this work, we explore the limits of LLMs at performing decontextualization of document snippets in user-facing scenarios, focusing on two real-world settings: question answering and citation context previews for scientific documents. We propose a question-answering framework for decontextualization that allows for better handling of user information needs and preferences when determining the scope of rewriting. We present results showing that state-of-the-art LLMs under our framework remain competitive with end-to-end approaches. We also explore incorporating user preferences into the system, finding that our framework allows for controllability.
URL
https://arxiv.org/abs/2305.14772