Abstract
Automatic personality trait assessment is essential for high-quality human-machine interaction. Systems capable of analyzing human behavior could be applied to self-driving cars, medical research, and surveillance, among many other domains. We present a multimodal deep neural network with a Siamese extension for apparent personality trait prediction, trained on short video recordings and exploiting modality-invariant embeddings. Acoustic, visual, and textual information is utilized to reach a high-performance solution for this task. Because the target distribution of the analyzed dataset is highly concentrated, even differences in the third decimal place are meaningful. Our proposed method addresses the challenge of under-represented extreme values, achieves an average MAE improvement of 0.0033, and shows a clear advantage over the baseline multimodal DNN without the introduced module.
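The abstract mentions a Siamese extension over modality-invariant embeddings. The paper's exact loss is not stated here; as a hypothetical illustration, a Siamese setup is often trained with a contrastive loss that pulls embeddings of the same sample (e.g. its audio and text views) together and pushes mismatched pairs apart. A minimal sketch, with all names and the margin value chosen for illustration only:

```python
import math

def euclidean(u, v):
    # L2 distance between two embedding vectors of equal length
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def siamese_contrastive_loss(emb_a, emb_b, similar, margin=1.0):
    """Contrastive loss for one pair of embeddings.

    similar=True  -> pull the pair together (penalize distance),
    similar=False -> push the pair at least `margin` apart.
    """
    d = euclidean(emb_a, emb_b)
    if similar:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Toy usage: two 3-d "modality" embeddings of the same clip
audio_emb = [0.1, 0.2, 0.3]
text_emb = [0.1, 0.2, 0.4]
loss_matched = siamese_contrastive_loss(audio_emb, text_emb, similar=True)
loss_mismatched = siamese_contrastive_loss(audio_emb, text_emb, similar=False)
```

In practice the embeddings would come from the per-modality subnetworks, and the Siamese term would be added to the trait-regression loss; this sketch only shows the pairwise mechanism.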
URL
https://arxiv.org/abs/2405.03846