Abstract
We present CLARE, a novel multimodal dataset for Cognitive Load Assessment in REaltime. The dataset contains physiological and gaze data from 24 participants, with self-reported cognitive-load scores as ground-truth labels. It comprises four modalities: Electrocardiography (ECG), Electrodermal Activity (EDA), Electroencephalography (EEG), and gaze tracking. To induce varying levels of mental load, each participant completed four nine-minute sessions of a computer-based operator-performance and mental-workload task (the MATB-II software), with task complexity varying in one-minute segments. During the experiment, participants reported their cognitive load every 10 seconds. We also provide benchmark binary-classification results with machine learning and deep learning models under two evaluation schemes: 10-fold and leave-one-subject-out (LOSO) cross-validation. The benchmark results show that under 10-fold evaluation, a convolutional neural network (CNN) based deep learning model achieves the best classification performance using ECG, EDA, and gaze. In contrast, under LOSO, the best performance is achieved by the deep learning model using ECG, EDA, and EEG.
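The two evaluation schemes named in the abstract can be sketched as follows. This is a minimal illustration only: the feature matrix, labels, and classifier are synthetic placeholders (assumptions), not the actual CLARE data or the paper's benchmark models. LOSO is expressed here via scikit-learn's `LeaveOneGroupOut` with participant IDs as groups.

```python
# Sketch of 10-fold vs. leave-one-subject-out (LOSO) evaluation.
# All data below is randomly generated stand-in data, not CLARE.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_participants, segments = 24, 36          # 4 sessions x 9 one-minute segments
X = rng.normal(size=(n_participants * segments, 16))  # placeholder features
y = rng.integers(0, 2, size=len(X))                   # binary load label
groups = np.repeat(np.arange(n_participants), segments)  # participant IDs

clf = RandomForestClassifier(random_state=0)

# 10-fold: segments from the same participant may appear in train and test.
acc_10fold = cross_val_score(
    clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0)
).mean()

# LOSO: each fold holds out every segment of one participant.
acc_loso = cross_val_score(
    clf, X, y, groups=groups, cv=LeaveOneGroupOut()
).mean()

print(f"10-fold accuracy: {acc_10fold:.2f}, LOSO accuracy: {acc_loso:.2f}")
```

Because LOSO never lets a participant's data leak between train and test splits, it is typically the harder setting for physiological signals, which is consistent with the different best-performing modality combinations reported above.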
URL
https://arxiv.org/abs/2404.17098