Abstract
In this paper, we explore the unique modality of sketch for explainability, emphasising the profound impact of human strokes compared with conventional pixel-oriented studies. Beyond explaining network behaviour, we discern the genuine implications of explainability across diverse downstream sketch-related tasks. We propose a lightweight and portable explainability solution -- a seamless plugin that integrates effortlessly with any pre-trained model, eliminating the need for re-training. Demonstrating its adaptability, we present four applications: the highly studied retrieval and generation tasks, and the completely novel assisted drawing and sketch adversarial attacks. The centrepiece of our solution is a stroke-level attribution map that takes different forms when linked with downstream tasks. By addressing the inherent non-differentiability of rasterisation, we enable explanations at both the coarse stroke level (SLA) and the partial stroke level (P-SLA), each with its advantages for specific downstream tasks.
Abstract (translated)
In this paper, we explore the unique modality of sketch for explainability, emphasising the profound impact of human strokes compared with conventional pixel-oriented studies. Beyond explaining network behaviour, we discern the genuine implications of explainability across diverse downstream sketch-related tasks. We propose a lightweight and portable explainability solution -- a seamless plugin that integrates effortlessly with any pre-trained model without re-training. Demonstrating its adaptability, we present four applications: the highly studied retrieval and generation tasks, and the completely novel assisted drawing and sketch adversarial attacks. The centrepiece of our solution is a stroke-level attribution map that takes different forms when linked with downstream tasks. By addressing the inherent non-differentiability of rasterisation, we enable explanations at both the coarse stroke level (SLA) and the partial stroke level (P-SLA), each with its advantages for specific downstream tasks.
URL
https://arxiv.org/abs/2403.09480