Abstract
Handwriting verification is a critical task in document forensics. Deep learning based approaches often face skepticism from forensic document examiners due to their lack of explainability and their reliance on extensive training data and handcrafted features. This paper explores the use of Vision Language Models (VLMs), such as OpenAI's GPT-4o and Google's PaliGemma, to address these challenges. By leveraging their Visual Question Answering capabilities and zero-shot Chain-of-Thought (CoT) reasoning, our goal is to provide clear, human-understandable explanations for model decisions. Our experiments on the CEDAR handwriting dataset demonstrate that VLMs offer enhanced interpretability, reduce the need for large training datasets, and adapt better to diverse handwriting styles. However, results show that the CNN-based ResNet-18 architecture outperforms both the zero-shot CoT prompt engineering approach with GPT-4o (accuracy: 70%) and a supervised fine-tuned PaliGemma (accuracy: 71%), achieving 84% accuracy on the CEDAR AND dataset. These findings highlight the potential of VLMs in generating human-interpretable decisions while underscoring the need for further advancements to match the performance of specialized deep learning models.
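To make the zero-shot CoT prompting approach concrete, below is a minimal sketch of how such a query to GPT-4o might look using the OpenAI Python SDK. The prompt wording, image file names, and examination criteria are illustrative assumptions, not the authors' actual pipeline.

```python
# A minimal sketch (not the paper's exact prompt or pipeline) of zero-shot
# Chain-of-Thought prompting for handwriting verification with GPT-4o.
# The prompt text and image paths are hypothetical.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_image(path: str) -> str:
    """Base64-encode an image file for inclusion in the request."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


# Hypothetical pair of handwriting samples to verify.
img_a = encode_image("sample_a.png")
img_b = encode_image("sample_b.png")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": (
                "You are a forensic document examiner. Compare the two "
                "handwriting samples. Think step by step about slant, "
                "letter formation, spacing, and pen pressure, then answer "
                "'same writer' or 'different writers' with your reasoning."
            )},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{img_a}"}},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{img_b}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

The step-by-step instruction elicits the CoT reasoning that makes the verdict human-interpretable, which is the explainability benefit the abstract emphasizes.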
URL
https://arxiv.org/abs/2407.21788