Abstract
Trained on the extensive SA-1B dataset, the Segment Anything Model (SAM) has demonstrated exceptional generalization and zero-shot capabilities, attracting widespread attention in areas such as medical image segmentation and remote sensing image segmentation. However, its performance on image manipulation detection remains largely unexplored and unconfirmed. Applying SAM to image manipulation detection poses two main challenges: a) its reliance on manual prompts, and b) the difficulty of single-view information in supporting cross-dataset generalization. To address these challenges, we develop a cross-view prompt learning paradigm called IMDPrompter built on SAM. Thanks to its automated prompt design, IMDPrompter no longer relies on manual guidance, enabling automated detection and localization. In addition, we propose components such as Cross-view Feature Perception, Optimal Prompt Selection, and Cross-View Prompt Consistency, which facilitate cross-view perceptual learning and guide SAM to generate accurate masks. Extensive experimental results on five datasets (CASIA, Columbia, Coverage, IMD2020, and NIST16) validate the effectiveness of our proposed method.
URL
https://arxiv.org/abs/2502.02454