Abstract
Keyphrase ranking plays a crucial role in information retrieval and summarization by enabling efficient indexing and retrieval of relevant information. Advances in natural language processing, especially large language models (LLMs), have improved keyphrase extraction and ranking. However, traditional methods often overlook diversity, resulting in redundant keyphrases. We propose a novel approach using Submodular Function Optimization (SFO) to balance relevance and diversity in keyphrase ranking. By framing the task as submodular maximization, our method selects diverse and representative keyphrases. Experiments on benchmark datasets show that our approach outperforms existing methods on both relevance and diversity metrics, while achieving state-of-the-art execution time. Our code is available online.
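The abstract does not specify the exact objective used; a common way to instantiate "submodular maximization balancing relevance and diversity" is a modular relevance term plus a facility-location coverage term, optimized greedily (the greedy algorithm carries a (1 - 1/e) approximation guarantee for monotone submodular objectives). The sketch below is an illustrative assumption, not the paper's actual method; the function name, the objective, and the toy scores are all hypothetical.

```python
import numpy as np

def greedy_submodular_rank(relevance, similarity, k):
    """Greedily select k keyphrases maximizing a monotone submodular
    objective F(S) = sum of relevance scores of S (modular term)
                   + facility-location coverage (diversity term):
                     sum_j max_{i in S} similarity[i, j].
    relevance:  length-n array of per-candidate relevance scores.
    similarity: n x n symmetric similarity matrix between candidates.
    NOTE: illustrative sketch only; not the paper's actual objective.
    """
    n = len(relevance)
    selected = []
    # coverage[j] = best similarity of candidate j to any selected phrase
    coverage = np.zeros(n)
    for _ in range(k):
        best_gain, best_i = -np.inf, -1
        for i in range(n):
            if i in selected:
                continue
            # marginal gain = relevance + increase in coverage
            new_cov = np.maximum(coverage, similarity[i])
            gain = relevance[i] + new_cov.sum() - coverage.sum()
            if gain > best_gain:
                best_gain, best_i = gain, i
        selected.append(best_i)
        coverage = np.maximum(coverage, similarity[best_i])
    return selected

# Toy example: candidates 0 and 1 are near-duplicates; the coverage
# term makes the greedy pass skip 1 in favor of the dissimilar 2.
rel = np.array([1.0, 0.9, 0.2, 0.1])
sim = np.array([[1.0, 0.95, 0.1, 0.1],
                [0.95, 1.0, 0.1, 0.1],
                [0.1, 0.1, 1.0, 0.2],
                [0.1, 0.1, 0.2, 1.0]])
print(greedy_submodular_rank(rel, sim, k=2))  # → [0, 2]
```

A pure relevance ranking would return [0, 1] here; the submodular coverage term trades a small amount of relevance for the non-redundant candidate 2, which is exactly the relevance/diversity balance the abstract describes.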
URL
https://arxiv.org/abs/2410.20080