Abstract
We present an interactive system that lets users manipulate images to explore the robustness and sensitivity of deep learning image classifiers. Because the system uses modern web technologies to run inference in the browser, users can remove image features with inpainting algorithms and obtain new classifications in real time, allowing them to ask a variety of "what if" questions by experimentally modifying images and observing how the model reacts. Our system allows users to compare and contrast which image regions humans and machine learning models use for classification, revealing a wide range of surprising results, from spectacular failures (e.g., a "water bottle" image becomes a "concert" when a person is removed) to impressive resilience (e.g., a "baseball player" image remains correctly classified even without a glove or base). We demonstrate our system at The 2018 Conference on Computer Vision and Pattern Recognition (CVPR) for the audience to try it live. Our system is open-sourced at https://github.com/poloclub/interactive-classification. A video demo is available at https://youtu.be/llub5GcOF6w.
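The remove-and-reclassify loop described above can be sketched as follows. This is an illustrative stand-in only, not the system's implementation: it fills the masked region with the mean of its surrounding pixels (a crude substitute for a real inpainting algorithm) and omits the classifier entirely. The function name and the toy grayscale image are hypothetical.

```python
# Sketch of "remove a feature, then re-classify": mask a rectangle and
# fill it from its surroundings. Real inpainting (e.g., in the demo
# system) is far more sophisticated; this mean fill is only a stand-in.

def remove_region(image, top, left, height, width):
    """Return a copy of `image` (a 2-D list of grayscale values) with the
    given rectangle filled by the mean of the pixels just outside it."""
    h, w = len(image), len(image[0])
    border = []
    # Collect the one-pixel ring of values surrounding the rectangle.
    for r in range(max(0, top - 1), min(h, top + height + 1)):
        for c in range(max(0, left - 1), min(w, left + width + 1)):
            inside = top <= r < top + height and left <= c < left + width
            if not inside:
                border.append(image[r][c])
    fill = sum(border) / len(border)
    # Overwrite the rectangle with the fill value; the original is untouched.
    out = [row[:] for row in image]
    for r in range(top, top + height):
        for c in range(left, left + width):
            out[r][c] = fill
    return out

# A 4x4 image with a bright 2x2 "feature" in the middle.
img = [[10, 10, 10, 10],
       [10, 99, 99, 10],
       [10, 99, 99, 10],
       [10, 10, 10, 10]]
# After removal, the edited image would be fed back to the classifier
# to see whether the prediction changes.
edited = remove_region(img, 1, 1, 2, 2)
```

In the actual system this step runs in the browser, with the edited image passed to an in-browser model for a new prediction in real time.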
URL
https://arxiv.org/abs/1806.05660