Abstract
Multi-label learning (MLL) requires comprehensive multi-semantic annotations that are hard to obtain in full, often resulting in missing-label scenarios. In this paper, we investigate Single Positive Multi-label Learning (SPML), where each image is associated with only one positive label. Existing SPML methods focus solely on designing losses via mechanisms such as hard pseudo-labeling and robust losses, which mostly lead to unacceptable false negatives. To address this issue, we first propose a generalized loss framework based on expected risk minimization that provides soft pseudo labels, and we show that the aforementioned losses can be seamlessly converted into our framework. In particular, we design a novel robust loss within this framework that enables flexible coordination between false positives and false negatives, and that additionally handles the imbalance between positive and negative samples. Extensive experiments show that our approach significantly improves SPML performance and outperforms the vast majority of state-of-the-art methods on all four benchmarks.
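To make the idea concrete, below is a minimal sketch of a soft-pseudo-label SPML loss of the general shape the abstract describes: the single observed positive gets a standard positive log-likelihood term, while each unobserved label gets a binary cross-entropy term against a soft pseudo label. The function name, the choice of the detached model score as the soft pseudo label, and the `alpha`/`beta` weights are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def spml_soft_loss(probs, positive_idx, alpha=0.5, beta=1.0):
    """Illustrative soft-pseudo-label loss for one image (not the paper's exact loss).

    probs        -- predicted per-label probabilities, shape (C,)
    positive_idx -- index of the single observed positive label
    alpha        -- assumed weight trading off false negatives vs. false positives
    beta         -- assumed weight balancing the (many) unobserved terms
                    against the single positive term
    """
    eps = 1e-12
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0 - eps)

    # Observed positive label: standard positive BCE term.
    loss_pos = -np.log(p[positive_idx])

    # Unobserved labels: BCE against a soft pseudo label q.
    # Here q is simply the (detached) model score itself, as one plausible choice.
    mask = np.ones_like(p, dtype=bool)
    mask[positive_idx] = False
    q = p[mask]
    loss_unobs = -(alpha * q * np.log(p[mask])
                   + (1.0 - alpha) * (1.0 - q) * np.log(1.0 - p[mask]))

    return loss_pos + beta * loss_unobs.mean()
```

Compared with the hard "assume negative" baseline (q fixed to 0 for all unobserved labels), the soft q lets confidently predicted labels contribute a positive-leaning gradient, which is one way to mitigate false negatives.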
URL
https://arxiv.org/abs/2405.03501