Abstract
Conversational search requires accurate interpretation of user intent from complex multi-turn contexts. This paper presents ChatRetriever, which inherits the strong generalization capability of large language models to robustly represent complex conversational sessions for dense retrieval. To achieve this, we propose a simple and effective dual-learning approach that adapts the LLM for retrieval via contrastive learning while enhancing complex session understanding through masked instruction tuning on high-quality conversational instruction data. Extensive experiments on five conversational search benchmarks demonstrate that ChatRetriever substantially outperforms existing conversational dense retrievers, achieving state-of-the-art performance on par with LLM-based rewriting approaches. Furthermore, ChatRetriever exhibits superior robustness in handling diverse conversational contexts. Our work highlights the potential of adapting LLMs for retrieval with complex inputs like conversational search sessions and proposes an effective approach to advance this research direction.
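The dual-learning objective described above can be sketched as a weighted sum of two losses: an in-batch contrastive loss that aligns session embeddings with relevant passage embeddings, and a masked-instruction-tuning loss (cross-entropy restricted to masked positions). This is a minimal illustrative sketch, not the paper's implementation; all function names, shapes, the temperature, and the weighting factor `alpha` are assumptions.

```python
# Hypothetical sketch of a dual-learning objective: contrastive retrieval
# adaptation plus masked instruction tuning. Names and shapes are illustrative.
import numpy as np

def info_nce_loss(session_emb, passage_emb, temperature=0.05):
    """In-batch contrastive loss: the positive passage for each session is the
    one at the same batch index; all other passages act as negatives."""
    # L2-normalize so dot products are cosine similarities.
    s = session_emb / np.linalg.norm(session_emb, axis=1, keepdims=True)
    p = passage_emb / np.linalg.norm(passage_emb, axis=1, keepdims=True)
    logits = s @ p.T / temperature                  # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # NLL of diagonal positives

def masked_instruction_loss(token_logits, target_ids, mask):
    """Cross-entropy computed only on masked positions (e.g. response tokens),
    mimicking masked instruction tuning on conversational data."""
    logits = token_logits - token_logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(target_ids)), target_ids]
    return (nll * mask).sum() / max(mask.sum(), 1)

def dual_learning_loss(session_emb, passage_emb,
                       token_logits, target_ids, mask, alpha=1.0):
    """Combine the two objectives; the simple weighted sum is an assumption."""
    return (info_nce_loss(session_emb, passage_emb)
            + alpha * masked_instruction_loss(token_logits, target_ids, mask))
```

Intuitively, the contrastive term pulls each session representation toward its relevant passage, while the masked-generation term keeps the backbone grounded in conversational understanding, which is the complementarity the abstract attributes to dual learning.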
URL
https://arxiv.org/abs/2404.13556