Abstract
Pretrained programming language (PL) models, trained on large-scale code corpora, have shown considerable potential for automating software engineering processes, streamlining code generation tasks such as code completion, code translation, and program synthesis. However, current approaches mainly rely on supervised fine-tuning objectives borrowed from text generation, neglecting sequence-level properties of code, including but not limited to compilability as well as syntactic and functional correctness. To address this limitation, we propose PPOCoder, a new code generation framework that combines pretrained PL models with Proximal Policy Optimization (PPO) deep reinforcement learning and incorporates execution feedback into model optimization as an external source of knowledge. PPOCoder is transferable across different code generation tasks and PLs. Extensive experiments on three code generation tasks demonstrate the effectiveness of our approach compared to SOTA methods, improving compilation success rates and functional correctness across different PLs. Our code can be found at this https URL .
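The execution feedback described above can be illustrated with a minimal sketch of a sequence-level reward signal: one term for compilability and one for functional correctness against unit tests. This is an illustrative assumption, not the paper's actual reward design; all function names here are hypothetical, and Python is assumed as the target PL.

```python
# Hypothetical sketch of execution-feedback rewards for RL-based code
# generation: reward compilable code and code that passes unit tests.
# Not PPOCoder's actual implementation.

def compilability_reward(code: str) -> float:
    """Return +1 if the generated Python code compiles, else -1."""
    try:
        compile(code, "<generated>", "exec")
        return 1.0
    except SyntaxError:
        return -1.0

def functional_reward(code: str, tests: list) -> float:
    """Return the fraction of unit tests the generated code passes.

    Each test is a callable that receives the executed namespace and
    raises AssertionError on failure (an illustrative convention).
    """
    namespace: dict = {}
    try:
        exec(compile(code, "<generated>", "exec"), namespace)
    except Exception:
        return -1.0  # code does not even run
    passed = 0
    for test in tests:
        try:
            test(namespace)
            passed += 1
        except Exception:
            pass
    return passed / len(tests) if tests else 0.0
```

In a PPO loop, such scalar rewards would be assigned to sampled code sequences and used to estimate the policy-gradient advantage, steering the pretrained PL model toward compilable and functionally correct outputs.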
URL
https://arxiv.org/abs/2301.13816