Abstract
In today's digital world, seeking answers to health questions on the Internet is common practice. However, existing question answering (QA) systems often rely on pre-selected and annotated evidence documents, making them inadequate for addressing novel questions. Our study focuses on the open-domain QA setting, where the key challenge is to first uncover relevant evidence in large knowledge bases. Using the common retrieve-then-read QA pipeline and PubMed as a trustworthy collection of medical research documents, we answer health questions from three diverse datasets. We vary different retrieval settings to observe their influence on the QA pipeline's performance, including the number of retrieved documents, the sentence selection process, the publication year of articles, and their citation counts. Our results reveal that cutting down on the number of retrieved documents and favoring more recent and highly cited documents can improve the final macro F1 score by up to 10%. We discuss the results, highlight interesting examples, and outline challenges for future research, such as managing evidence disagreement and crafting user-friendly explanations.
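The retrieval modifications described above can be illustrated with a minimal sketch: filter retrieved documents by publication year, re-rank the remainder by citation count, and truncate to a small top-k before the reader stage. All names here (`Doc`, `rerank`, the thresholds) are hypothetical illustrations, not the paper's actual implementation.

```python
# Hypothetical sketch of the retrieval-filtering step: keep only recent
# articles, favor highly cited ones, and cut down the document count.
# The Doc fields and rerank parameters are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Doc:
    pmid: str        # PubMed identifier
    score: float     # retriever relevance score
    year: int        # publication year
    citations: int   # citation count

def rerank(docs, top_k=5, min_year=2010):
    """Drop older articles, then sort by citations (ties broken by score)."""
    recent = [d for d in docs if d.year >= min_year]
    ranked = sorted(recent, key=lambda d: (d.citations, d.score), reverse=True)
    return ranked[:top_k]

docs = [
    Doc("111", 0.9, 2005, 300),  # highly cited but too old: filtered out
    Doc("222", 0.8, 2018, 150),
    Doc("333", 0.7, 2021, 40),
]
top = rerank(docs, top_k=2)
print([d.pmid for d in top])  # → ['222', '333']
```

Truncating to a small `top_k` mirrors the finding that fewer, better-vetted documents can outperform a larger but noisier evidence set.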
URL
https://arxiv.org/abs/2404.08359