Abstract
In this paper, we address the task of Disturbing Image Detection (DID) by exploiting knowledge encoded in Large Multimodal Models (LMMs). Specifically, we propose to exploit LMM knowledge in a two-fold manner: first, by extracting generic semantic descriptions, and second, by extracting LMM-elicited emotions. We then use CLIP's text encoder to obtain text embeddings of both the generic semantic descriptions and the LMM-elicited emotions. Finally, we combine these text embeddings with the corresponding CLIP image embeddings to perform the DID task. The proposed method significantly improves the baseline classification accuracy and achieves state-of-the-art performance on the augmented Disturbing Image Detection dataset.
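The pipeline described above can be sketched in a few lines. The following is a minimal, illustrative sketch assuming the Hugging Face CLIP implementation; the LMM-generated texts (`descriptions`, `emotions`), the mean-pooling of text embeddings, and the concatenation-plus-MLP classifier head are assumptions for illustration, not the authors' exact architecture.

```python
# Illustrative sketch: fuse CLIP image embeddings with text embeddings of
# LMM-generated descriptions and elicited emotions for DID classification.
# The fusion strategy and classifier head below are assumptions.
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical outputs of an LMM prompted for a generic semantic
# description of the image and for the emotions it elicits.
descriptions = ["a crowd gathered around a car accident at night"]
emotions = ["fear", "shock", "sadness"]

image = Image.open("example.jpg")

with torch.no_grad():
    # CLIP image embedding for the input image.
    img_inputs = processor(images=image, return_tensors="pt")
    image_emb = model.get_image_features(**img_inputs)        # (1, 512)

    # CLIP text embeddings for the LMM-derived texts.
    txt_inputs = processor(text=descriptions + emotions,
                           return_tensors="pt", padding=True)
    text_embs = model.get_text_features(**txt_inputs)         # (4, 512)

# Fuse the image embedding with the (mean-pooled) text embeddings,
# e.g. by concatenation, then classify disturbing vs. non-disturbing.
features = torch.cat([image_emb, text_embs.mean(dim=0, keepdim=True)], dim=-1)

classifier = nn.Sequential(
    nn.Linear(features.shape[-1], 256),
    nn.ReLU(),
    nn.Linear(256, 2),  # two classes: disturbing / non-disturbing
)
logits = classifier(features)
```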
URL
https://arxiv.org/abs/2406.12668