Abstract
Heterogeneous graph neural networks have recently gained attention for long document summarization, modeling extraction as a node classification task. Although effective, these models often require external tools or additional machine learning models to define graph components, producing highly complex and less intuitive structures. We present GraphLSS, a heterogeneous graph construction for long document extractive summarization, incorporating Lexical, Structural, and Semantic features. It defines two levels of information (words and sentences) and four types of edges (sentence semantic similarity, sentence occurrence order, word in sentence, and word semantic similarity) without any need for auxiliary learning models. Experiments on two benchmark datasets show that GraphLSS is competitive with top-performing graph-based methods, outperforming recent non-graph models. We release our code on GitHub.
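The graph construction described in the abstract can be made concrete with a small sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes PyTorch Geometric's HeteroData container, random placeholder embeddings in place of real word/sentence encoders, a hypothetical cosine-similarity threshold for the two similarity edge types, and an arbitrary word-to-sentence assignment for the word-in-sentence edges.

```python
# Minimal sketch of a GraphLSS-style heterogeneous graph (not the official code).
# Assumptions: PyTorch Geometric's HeteroData, random placeholder embeddings,
# and an illustrative cosine-similarity threshold for the similarity edges.
import torch
import torch.nn.functional as F
from torch_geometric.data import HeteroData

def similarity_edges(x, threshold=0.7):
    """Connect node pairs whose cosine similarity exceeds a (hypothetical) threshold."""
    sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)  # [N, N]
    sim.fill_diagonal_(0)                          # no self-loops
    src, dst = (sim > threshold).nonzero(as_tuple=True)
    return torch.stack([src, dst], dim=0)          # shape [2, num_edges]

num_words, num_sents, d = 50, 10, 128
word_x = torch.randn(num_words, d)                 # placeholder word embeddings
sent_x = torch.randn(num_sents, d)                 # placeholder sentence embeddings

data = HeteroData()
data['word'].x = word_x                            # word-level nodes
data['sentence'].x = sent_x                        # sentence-level nodes

# 1) sentence-sentence semantic similarity edges
data['sentence', 'similar_to', 'sentence'].edge_index = similarity_edges(sent_x)
# 2) sentence occurrence-order edges (each sentence -> the next one)
order = torch.arange(num_sents - 1)
data['sentence', 'next', 'sentence'].edge_index = torch.stack([order, order + 1])
# 3) word-in-sentence edges (words assigned to sentences round-robin, for illustration only)
word_ids = torch.arange(num_words)
data['word', 'in', 'sentence'].edge_index = torch.stack([word_ids, word_ids % num_sents])
# 4) word-word semantic similarity edges
data['word', 'similar_to', 'word'].edge_index = similarity_edges(word_x)

print(data)
```

In this sketch, extraction would proceed as the abstract describes: a heterogeneous GNN operates over the graph and the sentence nodes are classified as summary-worthy or not; real features (e.g., pretrained word and sentence embeddings) would replace the random tensors.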
URL
https://arxiv.org/abs/2410.21315