Abstract
People feel emotions when listening to music. However, emotions are difficult to capture and quantify algorithmically, so they are not tangible objects that can be exploited directly in the music composition process. We present Mugeetion, a novel musical interface designed to capture instances of users' emotional states from their facial gestures and map that data to associated musical features. Mugeetion translates qualitative data about emotional states into quantitative data that can be used in the sound generation process. We also presented and tested this work in the sound installation exhibition Hearing Seascape, using the audience's facial expressions: audience members heard the background sound change according to their emotional state. This work contributes to multiple research areas, including gesture tracking systems, emotion-sound modeling, and the connection between sound and facial gesture.
URL
https://arxiv.org/abs/1809.05502