Paper Reading AI Learner

The False Promise of Imitating Proprietary LLMs

2023-05-25 05:00:12
Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, Dawn Song

Abstract

An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). This approach looks to cheaply imitate the proprietary model's capabilities using a weaker open-source model. In this work, we critically analyze this approach. We first finetune a series of LMs that imitate ChatGPT using varying base model sizes (1.5B--13B), data sources, and imitation data amounts (0.3M--150M tokens). We then evaluate the models using crowd raters and canonical NLP benchmarks. Initially, we were surprised by the output quality of our imitation models -- they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT's style but not its factuality. Overall, we conclude that model imitation is a false promise: there exists a substantial capabilities gap between open and closed LMs that, with current methods, can only be bridged using an unwieldy amount of imitation data or by using more capable base LMs. In turn, we argue that the highest leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems.
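The imitation setup described above — finetuning a base LM on prompt/response pairs sampled from a stronger model — reduces to ordinary supervised finetuning in which the loss is applied only to the response tokens. A minimal sketch of that data preparation step is below; the token ids are toy stand-ins for a real tokenizer's output, and `-100` is the conventional "ignore this position" label index used by common cross-entropy implementations (this is an illustrative sketch, not the paper's actual training code):

```python
IGNORE_INDEX = -100  # conventional ignore label for cross-entropy loss


def build_example(prompt_ids, response_ids, eos_id):
    """Concatenate a prompt and an imitation response into one example.

    The model sees the full sequence, but the labels mask out the prompt
    positions, so the training signal comes only from the stronger
    model's response tokens -- i.e., pure imitation of its outputs.
    """
    input_ids = prompt_ids + response_ids + [eos_id]
    labels = [IGNORE_INDEX] * len(prompt_ids) + response_ids + [eos_id]
    return {"input_ids": input_ids, "labels": labels}


# Toy ids standing in for tokenized (instruction, ChatGPT-response) pairs.
example = build_example(prompt_ids=[5, 8, 2], response_ids=[7, 7, 9], eos_id=0)
```

Scaling the amount of such imitation data (0.3M–150M tokens in the paper) changes only how many of these examples are built, not the masking scheme itself.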

URL

https://arxiv.org/abs/2305.15717

PDF

https://arxiv.org/pdf/2305.15717.pdf

