Abstract
Abstractive summary generation is a challenging task that requires the model to comprehend the source text and generate a concise, coherent summary that captures the essential information. In this paper, we explore the use of an encoder-decoder approach for abstractive summary generation in the Urdu language. We employ a transformer-based model that uses self-attention mechanisms to encode the input text and generate a summary. Our experiments show that our model can produce summaries that are grammatically correct and semantically meaningful. We evaluate our model on a publicly available dataset and achieve state-of-the-art results in terms of ROUGE scores. We also conduct a qualitative analysis of our model's output to assess its effectiveness and limitations. Our findings suggest that the encoder-decoder approach is a promising method for abstractive summary generation in Urdu and can be extended to other languages with suitable modifications.
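The abstract reports results in terms of ROUGE scores, which measure n-gram overlap between a generated summary and a reference summary. As a rough illustration of the metric (not the authors' evaluation code, and not the official ROUGE scorer, which also handles stemming and multiple references), a minimal ROUGE-1 computation on whitespace tokens might look like this:

```python
from collections import Counter

def rouge1(reference: str, candidate: str) -> dict:
    """Minimal ROUGE-1: unigram-overlap precision, recall, and F1.

    A simplified sketch using whitespace tokenization; real ROUGE
    implementations add stemming, stopword options, and multi-reference
    aggregation.
    """
    ref_counts = Counter(reference.split())
    cand_counts = Counter(candidate.split())
    # Clipped overlap: a candidate unigram counts at most as many
    # times as it appears in the reference.
    overlap = sum((ref_counts & cand_counts).values())
    precision = overlap / max(sum(cand_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

scores = rouge1("the cat sat on the mat", "the cat is on the mat")
```

Here five of the six candidate unigrams also appear in the reference, so precision, recall, and F1 all come out to 5/6. ROUGE-2 and ROUGE-L follow the same idea with bigrams and longest common subsequences, respectively.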
URL
https://arxiv.org/abs/2305.16195