Abstract
Scene graph generation (SGG) aims to predict visual relationships between pairs of objects in an image. Prevailing SGG methods assume a one-off learning process: whenever new relationships emerge, this conventional paradigm requires retraining on all previously observed samples to mitigate the risk of forgetting previously acquired knowledge. This work addresses that pitfall inherent in prior relationship-prediction approaches. Motivated by the success of in-context learning in pretrained language models, our approach equips the model to predict relationships and continuously acquire new knowledge without succumbing to catastrophic forgetting. To this end, we introduce a novel and practical framework for scene graph generation, namely Lifelong Scene Graph Generation (LSGG), in which tasks (i.e., groups of predicates) arrive in a streaming fashion. Under this framework, the model trains exclusively on the current task, with no access to previously encountered training data beyond a limited number of exemplars, yet it must infer all predicates it has encountered so far. Rigorous experiments demonstrate the superiority of the proposed method over state-of-the-art SGG models under the LSGG setting across a diverse array of metrics. In addition, extensive experiments on the two mainstream benchmark datasets, VG and Open-Image(v6), show that our model outperforms a number of competitive SGG models in both the continual-learning and conventional settings. Finally, comprehensive ablation studies demonstrate the effectiveness of each component of our model.
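The LSGG protocol described above (streaming tasks, a small exemplar buffer, and evaluation over all predicates seen so far) can be sketched as a minimal training loop. This is an illustrative sketch only, not the paper's method: `ToyModel`, the task dictionaries, and the random exemplar-selection policy are hypothetical stand-ins for an actual SGG model, predicate groups, and exemplar strategy.

```python
import random

class ToyModel:
    """Hypothetical stand-in for an SGG model; fit/evaluate are placeholders."""
    def __init__(self):
        self.trained_on = []

    def fit(self, data):
        # In a real system this would update model parameters.
        self.trained_on = list(data)

    def evaluate(self, predicates):
        # Report what the model was asked to cover at this step.
        return {"num_predicates": len(predicates),
                "train_size": len(self.trained_on)}

def lifelong_train(model, task_stream, exemplars_per_task=2, seed=0):
    """Run the LSGG-style protocol: at each step, train only on the current
    task's samples plus a limited exemplar memory, then evaluate on every
    predicate encountered so far."""
    random.seed(seed)
    memory = []            # limited exemplars retained from past tasks
    seen_predicates = set()
    results = []
    for task in task_stream:
        seen_predicates.update(task["predicates"])
        # Training data = current task only, plus replayed exemplars.
        model.fit(task["samples"] + memory)
        # Keep a few exemplars from this task for future replay.
        k = min(exemplars_per_task, len(task["samples"]))
        memory.extend(random.sample(task["samples"], k))
        # Evaluation always covers all predicates seen so far.
        results.append(model.evaluate(seen_predicates))
    return results

tasks = [
    {"predicates": {"on", "near"}, "samples": ["s1", "s2", "s3"]},
    {"predicates": {"holding"},    "samples": ["s4", "s5"]},
]
history = lifelong_train(ToyModel(), tasks)
```

After the second task, the model has trained on only 4 samples (2 current + 2 exemplars) yet is evaluated on all 3 predicates, which is the core asymmetry that makes the setting prone to catastrophic forgetting.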
URL
https://arxiv.org/abs/2401.14626