Abstract
Automated content filtering and moderation is an important tool that allows online platforms to build thriving user communities that facilitate cooperation and prevent abuse. Unfortunately, resourceful actors try to bypass automated filters in a bid to post content that violates platform policies and codes of conduct. To reach this goal, these malicious actors may obfuscate policy-violating images (e.g., overlay harmful images with carefully selected benign images or visual patterns) to prevent machine learning models from reaching the correct decision. In this paper, we invite researchers to tackle this specific issue and present a new image benchmark. This benchmark, based on ImageNet, simulates the types of obfuscations created by malicious actors. It goes beyond ImageNet-$\textrm{C}$ and ImageNet-$\bar{\textrm{C}}$ by proposing general, drastic, adversarial modifications that preserve the original content intent. It aims to tackle a more common adversarial threat than the one considered by $\ell_p$-norm bounded adversaries. We evaluate 33 pretrained models on the benchmark and train models with different augmentations, architectures, and training methods on subsets of the obfuscations to measure generalization. We hope this benchmark will encourage researchers to test their models and methods and to find new approaches that are more robust to these obfuscations.
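To make the overlay-style obfuscation concrete, below is a minimal sketch (not the benchmark's actual code) of one obfuscation the abstract describes: alpha-blending a benign image or pattern onto a policy-violating image so that a classifier's prediction degrades while a human can still recognize the content. The file names and blend weight are illustrative assumptions.

```python
# Minimal sketch of an overlay-style obfuscation, assuming PIL is available.
# The benchmark's actual obfuscations are more varied and adversarially tuned.
from PIL import Image


def overlay_obfuscate(content_path: str, overlay_path: str, alpha: float = 0.4) -> Image.Image:
    """Composite a benign overlay onto the content image with weight `alpha`."""
    content = Image.open(content_path).convert("RGB")
    overlay = Image.open(overlay_path).convert("RGB").resize(content.size)
    # Pixel-wise convex combination: (1 - alpha) * content + alpha * overlay.
    # The original content stays dominant (and human-recognizable) for small alpha.
    return Image.blend(content, overlay, alpha)


if __name__ == "__main__":
    # Hypothetical inputs: "cat.jpg" stands in for a policy-violating image,
    # "texture.jpg" for a carefully selected benign overlay.
    obfuscated = overlay_obfuscate("cat.jpg", "texture.jpg", alpha=0.4)
    obfuscated.save("cat_obfuscated.jpg")
```

In practice a malicious actor would tune the overlay image and blend weight against the target model, which is what distinguishes these obfuscations from the random corruptions of ImageNet-$\textrm{C}$.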
URL
https://arxiv.org/abs/2301.12993