Paper Reading AI Learner

IMDPrompter: Adapting SAM to Image Manipulation Detection by Cross-View Automated Prompt Learning

2025-02-04 16:20:41
Quan Zhang, Yuxin Qi, Xi Tang, Jinwei Fang, Xi Lin, Ke Zhang, Chun Yuan

Abstract

Using extensive training data from SA-1B, the Segment Anything Model (SAM) has demonstrated exceptional generalization and zero-shot capabilities, attracting widespread attention in areas such as medical image segmentation and remote sensing image segmentation. However, its performance on image manipulation detection remains largely unexplored. Applying SAM to image manipulation detection faces two main challenges: a) reliance on manual prompts, and b) single-view information being insufficient to support cross-dataset generalization. To address these challenges, we develop a cross-view prompt learning paradigm called IMDPrompter based on SAM. Thanks to its automated prompt design, IMDPrompter no longer relies on manual guidance, enabling automated detection and localization. Additionally, we propose components such as Cross-view Feature Perception, Optimal Prompt Selection, and Cross-View Prompt Consistency, which facilitate cross-view perceptual learning and guide SAM to generate accurate masks. Extensive experimental results on five datasets (CASIA, Columbia, Coverage, IMD2020, and NIST16) validate the effectiveness of our proposed method.
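The abstract does not spell out IMDPrompter's exact prompt-generation design, but the general idea of replacing SAM's manual prompts with automated ones can be sketched as follows: a detector produces a coarse manipulation heatmap, and the hottest locations are converted into positive point prompts in the `(x, y)` coordinate / label format that segment-anything's `SamPredictor.predict` accepts. The `heatmap_to_point_prompts` helper below is hypothetical, a minimal illustration rather than the paper's method.

```python
import numpy as np

def heatmap_to_point_prompts(heatmap, threshold=0.5, num_points=3):
    """Turn a coarse manipulation heatmap into SAM-style point prompts.

    Hypothetical helper: selects the top-scoring pixels above `threshold`
    as positive point prompts, returned as (x, y) coordinates with label 1
    (foreground), the format SamPredictor.predict expects.
    """
    ys, xs = np.where(heatmap >= threshold)
    if len(xs) == 0:  # nothing flagged as manipulated
        return np.empty((0, 2), dtype=int), np.empty((0,), dtype=int)
    scores = heatmap[ys, xs]
    order = np.argsort(scores)[::-1][:num_points]  # highest scores first
    coords = np.stack([xs[order], ys[order]], axis=1)  # (x, y) pairs
    labels = np.ones(len(order), dtype=int)            # 1 = foreground
    return coords, labels

# Toy example: a 5x5 heatmap with a hot region in the lower right.
hm = np.zeros((5, 5))
hm[3:5, 3:5] = [[0.6, 0.9], [0.7, 0.8]]
coords, labels = heatmap_to_point_prompts(hm, num_points=2)
```

These prompts could then be fed to SAM (e.g. `predictor.predict(point_coords=coords, point_labels=labels)`) so that mask generation runs without any human clicks, which is the automation the abstract describes.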

URL

https://arxiv.org/abs/2502.02454

PDF

https://arxiv.org/pdf/2502.02454.pdf

