Abstract
Technological advancements in web platforms allow people to express and share emotions towards textual write-ups shared by others. This brings about two interesting domains for analysis: emotion expressed by the writer and emotion elicited from the readers. In this paper, we propose a novel approach for Readers' Emotion Detection from short-text documents using a deep learning model called REDAffectiveLM. Within state-of-the-art NLP tasks, it is well understood that utilizing context-specific representations from transformer-based pre-trained language models helps achieve improved performance. Within this affective computing task, we explore how incorporating affective information can further enhance performance. Towards this, we leverage context-specific and affect-enriched representations by using a transformer-based pre-trained language model in tandem with an affect-enriched Bi-LSTM+Attention network. For empirical evaluation, we procure a new dataset, REN-20k, besides using RENh-4k and SemEval-2007. We rigorously evaluate REDAffectiveLM across these datasets against a comprehensive set of state-of-the-art baselines, where our model consistently outperforms the baselines and obtains statistically significant results. Our results establish that utilizing affect-enriched representations along with context-specific representations within a neural architecture can considerably enhance readers' emotion detection. Since the impact of affect enrichment specifically on readers' emotion detection is not well explored, we conduct a detailed analysis of the affect-enriched Bi-LSTM+Attention component using qualitative and quantitative model behavior evaluation techniques. We observe that, compared to conventional semantic embeddings, affect-enriched embeddings increase the ability of the network to effectively identify and assign weightage to key terms responsible for readers' emotion detection.
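To make the described two-branch design concrete, the following is a minimal sketch (not the authors' released code) of how a transformer encoder's context-specific representation could be fused with an affect-enriched Bi-LSTM+Attention branch for readers' emotion prediction. The class name, dimensions, choice of "bert-base-uncased", the fusion-by-concatenation strategy, and the source of the affect-enriched embeddings (e.g., word vectors enriched with a lexicon such as NRC) are illustrative assumptions.

```python
# Hypothetical sketch of a REDAffectiveLM-style model, assuming PyTorch and
# HuggingFace Transformers; details not stated in the abstract are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel

class REDAffectiveLMSketch(nn.Module):
    def __init__(self, affect_embeddings, num_emotions=6, lstm_hidden=128,
                 transformer_name="bert-base-uncased"):
        super().__init__()
        # Context-specific branch: pre-trained transformer encoder.
        self.encoder = AutoModel.from_pretrained(transformer_name)
        # Affect-enriched branch: embedding matrix assumed to carry
        # lexicon-based affect information in addition to semantics.
        self.affect_emb = nn.Embedding.from_pretrained(affect_embeddings, freeze=False)
        self.bilstm = nn.LSTM(affect_embeddings.size(1), lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * lstm_hidden, 1)  # attention scorer over time steps
        fused_dim = self.encoder.config.hidden_size + 2 * lstm_hidden
        self.classifier = nn.Linear(fused_dim, num_emotions)

    def forward(self, input_ids, attention_mask, affect_token_ids):
        # [CLS]-position context representation from the transformer.
        ctx = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        # Bi-LSTM over affect-enriched embeddings, summarized by attention.
        h, _ = self.bilstm(self.affect_emb(affect_token_ids))
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)
        affect_vec = torch.bmm(weights.unsqueeze(1), h).squeeze(1)
        # Fuse both views and score the reader-emotion categories.
        return self.classifier(torch.cat([ctx, affect_vec], dim=-1))
```

The attention weights in this branch are what the abstract's analysis inspects: with affect-enriched embeddings, more of the weight is expected to fall on emotion-bearing terms than with conventional semantic embeddings.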
URL
https://arxiv.org/abs/2301.08995