Abstract
Few-shot Multi-label Intent Detection (MID) is crucial for dialogue systems, aiming to detect the multiple intents of utterances in low-resource dialogue domains. Previous studies adopt a two-stage pipeline: they first learn representations of utterances with multiple labels and then apply a threshold-based strategy to identify the multi-label results. However, these methods rely on representation classification and ignore relations between instances, leading to error propagation. To address these issues, we propose a multi-label joint learning method for few-shot MID in an end-to-end manner, which constructs an instance relation learning network with label knowledge propagation to eliminate error propagation. Concretely, we learn the interaction relations between instances with class information to propagate label knowledge between a few labeled (support set) and unlabeled (query set) instances. With label knowledge propagation, the relation strength between instances directly indicates whether two utterances belong to the same intent for multi-label prediction. In addition, a dual relation-enhanced loss is developed to optimize support- and query-level relation strength to improve performance. Experiments show that our method outperforms strong baselines by an average of 9.54% AUC and 11.19% Macro-F1 in 1-shot scenarios.
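The core idea of label knowledge propagation can be illustrated with a minimal sketch. This is not the paper's implementation: it substitutes plain cosine similarity for the learned relation network, and the per-class max aggregation is a hypothetical choice; all function names are illustrative.

```python
import numpy as np

def relation_strengths(query_emb, support_emb):
    """Pairwise relation strengths between query and support utterances.

    Cosine similarity stands in for the learned instance relation network.
    Returns an array of shape (num_query, num_support).
    """
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    s = support_emb / np.linalg.norm(support_emb, axis=1, keepdims=True)
    return q @ s.T

def propagate_labels(rel, support_labels):
    """Propagate multi-hot support labels to queries via relation strengths.

    rel: (num_query, num_support) relation strengths.
    support_labels: (num_support, num_classes) multi-hot label matrix.
    A query's score for a class is its strongest relation to any support
    instance carrying that class (a hypothetical aggregation choice).
    """
    num_query = rel.shape[0]
    num_classes = support_labels.shape[1]
    scores = np.zeros((num_query, num_classes))
    for c in range(num_classes):
        members = support_labels[:, c] > 0
        if members.any():
            scores[:, c] = rel[:, members].max(axis=1)
    return scores
```

Under this sketch, multi-label prediction falls out of the relation strengths themselves: a query can score highly for several classes at once when it relates strongly to support utterances of different intents, with no separate threshold-tuning stage over a classifier's logits.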
URL
https://arxiv.org/abs/2510.07776