Paper Reading AI Learner

ExClaim: Explainable Neural Claim Verification Using Rationalization

2023-01-21 08:26:27
Sai Gurrapu, Lifu Huang, Feras A. Batarseh

Abstract

With the advent of deep learning, text generation language models have improved dramatically, producing text at a level similar to human writing. This can lead to rampant misinformation because content can now be created cheaply and distributed quickly. Automated claim verification methods exist to validate claims, but they lack foundational data and often use mainstream news as evidence sources that are strongly biased toward a specific agenda. Current claim verification methods use deep neural network models and complex algorithms to achieve high classification accuracy, but at the expense of model explainability. The models are black boxes, and their decision-making process and the steps they take to arrive at a final prediction are obfuscated from the user. We introduce a novel claim verification approach, namely ExClaim, that attempts to provide an explainable claim verification system with foundational evidence. Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim and justifies the verdict through a natural language explanation (rationale) that describes the model's decision-making process. ExClaim treats the verdict classification task as a question-answering problem and achieves an F1 score of 0.93. It also provides subtask-level explanations to justify the intermediate outcomes. Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes. Ensuring claim verification systems are assured, rational, and explainable is an essential step toward improving Human-AI trust and the accessibility of black-box systems.
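The reported 0.93 F1 is the harmonic mean of precision and recall over the verdict predictions. The sketch below (an illustration with hypothetical verdict labels, not the authors' evaluation code or dataset) shows how that metric is computed for a binary verdict classifier:

```python
# Minimal sketch of F1 computation for verdict classification.
# The label names ("SUPPORTS"/"REFUTES") and the toy data are assumptions
# for illustration; they are not taken from the ExClaim paper.

def f1_score(gold, pred, positive="SUPPORTS"):
    """Binary F1 for the positive verdict class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)   # fraction of predicted positives that are correct
    recall = tp / (tp + fn)      # fraction of gold positives that are recovered
    return 2 * precision * recall / (precision + recall)

gold = ["SUPPORTS", "SUPPORTS", "REFUTES", "SUPPORTS", "REFUTES"]
pred = ["SUPPORTS", "REFUTES", "REFUTES", "SUPPORTS", "SUPPORTS"]
print(round(f1_score(gold, pred), 4))  # 0.6667
```

Because F1 balances precision and recall, a high score like 0.93 indicates the classifier is neither over-predicting nor missing the positive verdict class.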

URL

https://arxiv.org/abs/2301.08914

PDF

https://arxiv.org/pdf/2301.08914.pdf

