Abstract
Embodied Reference Understanding (ERU) requires identifying a target object in a visual scene from both language instructions and pointing cues. While prior work has made progress in open-vocabulary object detection, it often fails in ambiguous scenarios where multiple candidate objects exist in the scene. To address these challenges, we propose a novel ERU framework that jointly leverages LLM-based data augmentation, a depth-map modality, and a depth-aware decision module. This design enables robust integration of linguistic and embodied cues and improves disambiguation in complex or cluttered environments. Experiments on two datasets demonstrate that our approach significantly outperforms existing baselines, achieving more accurate and reliable referent detection.
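The abstract does not specify how the depth-aware decision module is implemented. As a purely illustrative sketch (not the authors' method; all function names, the camera-intrinsics parameters, and the scoring formula are assumptions), such a module might back-project the pointing gesture into 3D using the depth map and blend each candidate's language-grounding score with its geometric agreement to the pointing ray:

```python
import numpy as np

def depth_aware_score(boxes, lang_scores, depth_map, wrist, fingertip,
                      fx, fy, cx, cy, alpha=0.5):
    """Illustrative sketch only: rank candidate boxes by combining a
    language-grounding score with agreement to a 3D pointing ray
    recovered from the depth map. (fx, fy, cx, cy) are assumed
    pinhole-camera intrinsics; wrist/fingertip are 2D keypoints."""
    def backproject(u, v):
        # Lift a pixel (u, v) to a 3D point using its depth value.
        z = depth_map[int(v), int(u)]
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    ray_origin = backproject(*wrist)
    ray_dir = backproject(*fingertip) - ray_origin
    ray_dir /= np.linalg.norm(ray_dir)

    scores = []
    for (x1, y1, x2, y2), ls in zip(boxes, lang_scores):
        center = backproject((x1 + x2) / 2, (y1 + y2) / 2)
        # Perpendicular distance from the box's 3D center to the ray.
        v = center - ray_origin
        dist = np.linalg.norm(v - np.dot(v, ray_dir) * ray_dir)
        # Blend language confidence with geometric pointing agreement.
        scores.append(alpha * ls + (1 - alpha) * np.exp(-dist))
    return int(np.argmax(scores))
```

The exponential distance term and the blending weight `alpha` are hypothetical choices made here for concreteness; the paper should be consulted for the actual decision rule.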
URL
https://arxiv.org/abs/2510.08278