Paper Reading AI Learner

Automatically Extracting Numerical Results from Randomized Controlled Trials with Large Language Models

2024-05-02 19:20:11
Hye Sun Yun, David Pogrebitskiy, Iain J. Marshall, Byron C. Wallace

Abstract

Meta-analyses statistically aggregate the findings of different randomized controlled trials (RCTs) to assess treatment effectiveness. Because this yields robust estimates of treatment effectiveness, results from meta-analyses are considered the strongest form of evidence. However, rigorous evidence syntheses are time-consuming and labor-intensive, requiring manual extraction of data from individual trials to be synthesized. Ideally, language technologies would permit fully automatic meta-analysis, on demand. This requires accurately extracting numerical results from individual trials, which has been beyond the capabilities of natural language processing (NLP) models to date. In this work, we evaluate whether modern large language models (LLMs) can reliably perform this task. We annotate (and release) a modest but granular evaluation dataset of clinical trial reports with numerical findings attached to interventions, comparators, and outcomes. Using this dataset, we evaluate the performance of seven LLMs applied zero-shot for the task of conditionally extracting numerical findings from trial reports. We find that massive LLMs that can accommodate lengthy inputs are tantalizingly close to realizing fully automatic meta-analysis, especially for dichotomous (binary) outcomes (e.g., mortality). However, LLMs -- including ones trained on biomedical texts -- perform poorly when the outcome measures are complex and tallying the results requires inference. This work charts a path toward fully automatic meta-analysis of RCTs via LLMs, while also highlighting the limitations of existing models for this aim.
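The workflow sketched in the abstract has two steps: extract, from each trial report, the numerical findings tied to a given intervention, comparator, and outcome (ICO), then statistically pool those findings across trials. For a dichotomous outcome such as mortality, the extracted numbers are typically event counts and arm sizes, and pooling reduces to a standard odds-ratio meta-analysis. The Python sketch below illustrates only that aggregation step; the ExtractedResult fields and the fixed-effect, inverse-variance pooling of log odds ratios (with a 0.5 continuity correction) are illustrative textbook choices, not the specific pipeline evaluated in the paper.

import math
from dataclasses import dataclass

@dataclass
class ExtractedResult:
    # One (intervention, comparator, outcome) finding extracted from a single trial report.
    intervention: str
    comparator: str
    outcome: str
    events_intervention: int   # e.g., deaths observed in the intervention arm
    total_intervention: int    # participants randomized to the intervention arm
    events_comparator: int
    total_comparator: int

def log_odds_ratio(r):
    # Log odds ratio and its variance for one trial, with a 0.5
    # continuity correction to guard against zero cells.
    a = r.events_intervention + 0.5
    b = r.total_intervention - r.events_intervention + 0.5
    c = r.events_comparator + 0.5
    d = r.total_comparator - r.events_comparator + 0.5
    return math.log((a * d) / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

def pool_fixed_effect(results):
    # Inverse-variance fixed-effect pooling; returns the pooled odds
    # ratio and a 95% confidence interval.
    stats = [log_odds_ratio(r) for r in results]
    weights = [1.0 / var for _, var in stats]
    pooled = sum(w * lor for w, (lor, _) in zip(weights, stats)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

# Hypothetical extracted records for the same ICO triple across two trials.
trials = [
    ExtractedResult("drug X", "placebo", "mortality", 12, 100, 20, 100),
    ExtractedResult("drug X", "placebo", "mortality", 8, 150, 15, 145),
]
odds_ratio, (lo, hi) = pool_fixed_effect(trials)
print(f"pooled OR = {odds_ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")

A production meta-analysis would also assess heterogeneity and often use a random-effects model; the point of the sketch is only that once the per-arm numbers are extracted accurately, the aggregation itself is mechanical, which is why extraction accuracy is the bottleneck the paper studies.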


URL

https://arxiv.org/abs/2405.01686

PDF

https://arxiv.org/pdf/2405.01686.pdf

