
Evaluating Consistency and Reasoning Capabilities of Large Language Models

2024-04-25 10:03:14
Yash Saxena, Sarthak Chopra, Arunendra Mani Tripathi

Abstract

Large Language Models (LLMs) are extensively used today across various sectors, including academia, research, business, and finance, for tasks such as text generation, summarization, and translation. Despite their widespread adoption, these models often produce incorrect and misleading information, exhibiting a tendency to hallucinate. This behavior can be attributed to several factors, with consistency and reasoning capabilities being significant contributors. LLMs frequently lack the ability to generate explanations and engage in coherent reasoning, leading to inaccurate responses. Moreover, they exhibit inconsistencies in their outputs. This paper aims to evaluate and compare the consistency and reasoning capabilities of both public and proprietary LLMs. The experiments utilize the BoolQ dataset as the ground truth, comprising questions, answers, and corresponding explanations. Queries from the dataset are presented as prompts to the LLMs, and the generated responses are evaluated against the ground truth answers. Additionally, explanations are generated to assess the models' reasoning abilities. Consistency is evaluated by repeatedly presenting the same query to the models and observing variations in their responses. To measure reasoning capabilities, the generated explanations are compared to the ground truth explanations using metrics such as BERTScore, BLEU, and F-1 scores. The findings reveal that proprietary models generally outperform public models in terms of both consistency and reasoning capabilities. However, even when presented with basic general knowledge questions, none of the models achieved a score of 90% in both consistency and reasoning. This study underscores the direct correlation between consistency and reasoning abilities in LLMs and highlights the inherent reasoning challenges present in current language models.
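
The abstract describes a concrete evaluation protocol: repeat each BoolQ query to estimate answer consistency, and score the generated explanation against the reference explanation with BLEU, token-level F-1, and BERTScore. The snippet below is a minimal illustrative sketch of those computations, not the authors' released code; the toy responses, the majority-vote definition of consistency, and the use of NLTK's sentence-level BLEU are assumptions on my part (BERTScore would additionally require the bert-score package).

```python
# Illustrative sketch (not the paper's code): consistency over repeated answers
# to one BoolQ question, plus BLEU and token-level F-1 between a generated
# explanation and the ground-truth explanation.
from collections import Counter
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def consistency(answers):
    """Fraction of repeated answers that agree with the majority answer."""
    counts = Counter(a.strip().lower() for a in answers)
    return counts.most_common(1)[0][1] / len(answers)

def token_f1(prediction, reference):
    """SQuAD-style token-overlap F-1 between two explanation strings."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def bleu(prediction, reference):
    """Sentence-level BLEU with smoothing, since explanations are short."""
    smooth = SmoothingFunction().method1
    return sentence_bleu([reference.lower().split()],
                         prediction.lower().split(),
                         smoothing_function=smooth)

# Toy data: five repeated answers to one question and one generated explanation.
repeated_answers = ["Yes", "Yes", "No", "Yes", "Yes"]
generated_expl = "Iran and Afghanistan share a land border in the east."
reference_expl = "Iran shares a land border with Afghanistan to its east."

print(f"consistency = {consistency(repeated_answers):.2f}")   # 0.80
print(f"token F-1   = {token_f1(generated_expl, reference_expl):.2f}")
print(f"BLEU        = {bleu(generated_expl, reference_expl):.3f}")
```

Here consistency is taken as the share of repeated answers matching the majority answer; the paper may formalize it differently (e.g., pairwise agreement across runs), so treat this only as one plausible reading of the described protocol.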

URL

https://arxiv.org/abs/2404.16478

PDF

https://arxiv.org/pdf/2404.16478.pdf

