Abstract
Automatic citation generation for sentences in a document or report is paramount for intelligence analysts, cybersecurity professionals, news agencies, and education personnel. In this research, we investigate whether large language models (LLMs) are capable of generating references based on two forms of sentence queries: (a) Direct Queries, where LLMs are asked to provide the author names of a given research article, and (b) Indirect Queries, where LLMs are asked to provide the title of a cited article when given a sentence from a different article. To demonstrate where LLMs stand on this task, we introduce a large dataset called REASONS, comprising abstracts from the 12 most popular domains of scientific research on arXiv. From around 20K research articles, we make the following deductions about public and proprietary LLMs: (a) state-of-the-art models, the often-anthropomorphized GPT-4 and GPT-3.5, exhibit a high pass percentage (PP) in order to minimize the hallucination rate (HR). When tested with this http URL (7B), they unexpectedly made more errors; (b) augmenting queries with relevant metadata lowered the PP and yielded the lowest HR; (c) advanced retrieval-augmented generation (RAG) using Mistral demonstrated consistent and robust citation support on indirect queries and matched the performance of GPT-3.5 and GPT-4. The HR across all domains and models decreased by an average of 41.93%, and the PP was reduced to 0% in most cases. In terms of generation quality, the average F1 score and BLEU were 68.09% and 57.51%, respectively; (d) testing with adversarial samples showed that LLMs, including advanced RAG with Mistral, struggle to understand context, but the extent of this issue was small for Mistral and GPT-4-Preview. Our study contributes valuable insights into the reliability of RAG for automated citation generation tasks.
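The abstract reports generation quality as token-level F1 and BLEU between generated and reference citation strings. The exact metric implementation is not given in the abstract; as an illustration only, the sketch below computes a SQuAD-style token-overlap F1 for a generated title against a ground-truth title (the example titles and the whitespace tokenization are assumptions, not taken from the paper):

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """SQuAD-style token-overlap F1 between a generated and a reference title.

    Tokenization here is simple lowercased whitespace splitting; the paper may
    use a different scheme.
    """
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Multiset intersection counts each shared token at most min(count) times.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Hypothetical generated vs. reference titles:
print(token_f1("attention is all you need", "Attention Is All You Need"))  # 1.0
print(token_f1("deep residual networks", "deep residual learning for image recognition"))
```

A BLEU score would additionally weight higher-order n-gram matches and apply a brevity penalty, which is why the reported BLEU (57.51%) sits below the reported F1 (68.09%).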
URL
https://arxiv.org/abs/2405.02228