Paper Reading AI Learner

Understanding Programs by Exploiting Test Cases

2023-05-23 01:51:46
Jianyu Zhao, Yuyang Rong, Yiwen Guo, Yifeng He, Hao Chen

Abstract

Semantic understanding of programs has attracted great attention in the community. Inspired by the recent success of large language models (LLMs) in natural language understanding, tremendous progress has been made by treating programming languages as another kind of natural language and training LLMs on corpora of program code. However, programs are fundamentally different from text: they are heavily structured and syntactically strict. In particular, programs and their basic units (i.e., functions and subroutines) are designed to exhibit a variety of behaviors and/or produce outputs given different inputs. The relationship between inputs and possible outputs/behaviors characterizes the functions/subroutines and profiles the program as a whole. We therefore propose to incorporate this relationship into learning, to achieve a deeper semantic understanding of programs. To obtain inputs representative enough to trigger the execution of most of the code, we resort to fuzz testing and propose fuzz tuning, which boosts program understanding and code representation learning given a pre-trained LLM. The effectiveness of the proposed method is verified on two program understanding tasks, code clone detection and code classification, where it outperforms the current state of the art by large margins. Code is available at this https URL.
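The abstract does not spell out how fuzz-generated inputs are combined with source code for learning, so the following is only a minimal sketch of the general idea: randomly fuzz a function to collect input/output pairs that survive execution, then attach those pairs to the source text so a language model sees observed behavior alongside syntax. All names here (`fuzz_inputs`, `augment_source`, the toy function `f`) are hypothetical illustrations, not the paper's actual pipeline.

```python
import random

def fuzz_inputs(func, n_trials=100, int_range=(-1000, 1000)):
    """Sample random integer inputs and record (input, output) pairs
    for calls that execute without raising an exception.
    (A real fuzzer would be coverage-guided; this sketch is purely random.)"""
    pairs = []
    for _ in range(n_trials):
        x = random.randint(*int_range)
        try:
            pairs.append((x, func(x)))
        except Exception:
            continue  # crashing inputs are simply skipped in this sketch
    return pairs

def augment_source(source, pairs, k=3):
    """Append a few observed input/output examples to the source text,
    forming an input-behavior-annotated sample for an LLM."""
    lines = [source, "# observed behavior:"]
    for x, y in pairs[:k]:
        lines.append(f"# f({x}) -> {y}")
    return "\n".join(lines)

# toy subroutine to profile
def f(x):
    return x * x + 1

pairs = fuzz_inputs(f)
sample = augment_source("def f(x):\n    return x * x + 1", pairs)
print(sample)
```

In this sketch, `sample` is the text that would be fed to the pre-trained LLM during tuning, so that the model is exposed to the input-output relationship the abstract argues is essential, not just the code's surface form.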


URL

https://arxiv.org/abs/2305.13592

PDF

https://arxiv.org/pdf/2305.13592.pdf

