Paper Reading AI Learner

An Analysis of Emotion Communication Channels in Fan Fiction: Towards Emotional Storytelling

2019-06-06 03:53:58
Evgeny Kim, Roman Klinger

Abstract

The centrality of emotion to the stories told by humans is underpinned by numerous studies in literature and psychology. Research in automatic storytelling has recently turned towards emotional storytelling, in which characters' emotions play an important role in plot development. However, these studies mainly use emotion to generate propositional statements of the form "A feels affection towards B" or "A confronts B". Yet emotional behavior does not boil down to such propositional descriptions: humans display complex and highly variable patterns when communicating their emotions, both verbally and non-verbally. In this paper, we analyze how emotions are expressed non-verbally in a corpus of fan fiction short stories. Our analysis shows that stories written by humans convey character emotions along various non-verbal channels. We find that some non-verbal channels, such as the facial expressions and voice characteristics of the characters, are more strongly associated with joy, while gestures and body postures are more likely to occur with trust. Based on our analysis, we argue that automatic storytelling systems should take the variability of emotion expression into account when generating descriptions of characters' emotions.

URL

https://arxiv.org/abs/1906.02402

PDF

https://arxiv.org/pdf/1906.02402.pdf
