Abstract
Sonification can provide valuable insights about data, but most existing approaches are not designed to be controlled by the user in an interactive fashion. Interaction enables the designer of the sonification to experiment with sound design more rapidly, and allows the sonification to be modified in real time through various control parameters. In this paper, we describe two case studies of interactive sonification that utilize publicly available datasets recently described at the International Conference on Auditory Display (ICAD). They are from the health and energy domains: electroencephalogram (EEG) alpha wave data and air pollutant data consisting of nitrogen dioxide, sulfur dioxide, carbon monoxide, and ozone. We show how these sonifications can be recreated to support interaction using a general interactive sonification framework built with ChucK, Unity, and Chunity. In addition to supporting typical sonification methods common in existing sonification toolkits, our framework introduces novel methods such as supporting discrete events, interleaved playback of multiple data streams for comparison, and frequency modulation (FM) synthesis in which one data attribute modulates another. We also describe how these new functionalities can improve the sonification experience of the two datasets we investigated.
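The FM-synthesis idea mentioned above, where one data attribute modulates another, can be sketched in a few lines. The following is a minimal illustration, not the paper's ChucK implementation; the pollutant names and the mapping ranges (carrier 200-800 Hz, modulation index 0-5) are hypothetical choices made here for demonstration.

```python
import math

def fm_sample(t, carrier_freq, mod_freq, mod_index):
    # Classic FM: the carrier's phase is modulated by a sine at mod_freq,
    # with mod_index controlling the modulation depth.
    return math.sin(2 * math.pi * carrier_freq * t
                    + mod_index * math.sin(2 * math.pi * mod_freq * t))

def sonify(attr_a, attr_b, sample_rate=44100, duration=0.1):
    # Hypothetical mapping: one normalized data attribute (e.g. an NO2
    # reading in [0, 1]) sets the carrier frequency, while a second
    # attribute (e.g. an O3 reading in [0, 1]) sets the FM index, so
    # one attribute audibly modulates the timbre driven by the other.
    carrier = 200 + 600 * attr_a   # map attribute A to 200-800 Hz
    index = 5 * attr_b             # map attribute B to FM index 0-5
    n = int(sample_rate * duration)
    return [fm_sample(i / sample_rate, carrier, 110.0, index)
            for i in range(n)]

samples = sonify(0.5, 0.3)
```

In an interactive setting, `attr_a` and `attr_b` would be updated continuously from the incoming data stream (or from user controls), so the mapping above would be re-evaluated per audio block rather than rendering a fixed buffer.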
URL
https://arxiv.org/abs/2404.08813