Abstract
In the burgeoning field of Large Language Models (LLMs) such as ChatGPT and LLaMA, Prompt Engineering (PE) is renowned for boosting zero-shot or in-context learning (ICL) performance through prompt modifications. Yet sample design for downstream fine-tuning, crucial for task-specific LLM adaptation, remains largely unexplored. This paper introduces Sample Design Engineering (SDE), a methodical approach to enhancing LLMs' post-tuning performance by refining the input, output, and reasoning designs of fine-tuning samples. We conduct a series of in-domain (ID) and out-of-domain (OOD) experiments to assess the impact of various design options on LLMs' downstream performance, revealing several intriguing patterns that hold consistently across different LLMs. Based on these insights, we propose an integrated SDE strategy that combines the most effective options, and validate its consistent superiority over heuristic sample designs on complex downstream tasks such as multi-aspect sentiment analysis, event extraction, and nested entity recognition. Additionally, analyses of LLMs' inherent prompt/output perplexity and of their zero-shot and ICL abilities show that good PE strategies do not always translate into good SDE strategies. Code available at this https URL.
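To make the idea of "input, output, and reasoning designs" concrete, here is a minimal illustrative sketch (not the paper's actual code) of how a single fine-tuning sample for an aspect-based sentiment task might be rendered under different design choices. All option names (`instruction_first`, `output_format`, `add_reasoning`) and the task wording are assumptions for illustration only.

```python
# Hypothetical sketch of Sample Design Engineering (SDE) choices: the same
# labeled example can be serialized into different (prompt, completion)
# pairs depending on input, output, and reasoning design options.
import json
from dataclasses import dataclass

@dataclass
class SampleDesign:
    instruction_first: bool = True   # input design: instruction placement
    output_format: str = "json"     # output design: "json" or "lines"
    add_reasoning: bool = False      # reasoning design: prepend a CoT slot

def build_sample(text: str, labels: dict, design: SampleDesign) -> dict:
    """Render one supervised fine-tuning pair under the given design."""
    instruction = "Extract the sentiment for each aspect."
    prompt = (f"{instruction}\nText: {text}" if design.instruction_first
              else f"Text: {text}\n{instruction}")

    if design.output_format == "json":
        completion = json.dumps(labels, ensure_ascii=False)
    else:  # one "aspect: sentiment" pair per line
        completion = "\n".join(f"{k}: {v}" for k, v in labels.items())

    if design.add_reasoning:
        # A placeholder reasoning slot; a real dataset would fill this in.
        completion = "Reasoning: <analysis here>\n" + completion

    return {"prompt": prompt, "completion": completion}

sample = build_sample("The screen is great but the battery is poor.",
                      {"screen": "positive", "battery": "negative"},
                      SampleDesign(output_format="lines"))
```

The point of the sketch is that these serialization choices, analogous to prompt variations in PE but baked into the training data, are what SDE evaluates empirically.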
URL
https://arxiv.org/abs/2404.13033