Paper Reading AI Learner

Abstract Syntax Tree for Programming Language Understanding and Representation: How Far Are We?

2023-12-01 08:37:27
Weisong Sun, Chunrong Fang, Yun Miao, Yudu You, Mengzhe Yuan, Yuchen Chen, Quanjun Zhang, An Guo, Xiang Chen, Yang Liu, Zhenyu Chen

Abstract

Programming language understanding and representation (a.k.a. code representation learning) has always been a hot and challenging task in software engineering. It aims to apply deep learning techniques to produce numerical representations of source code features while preserving their semantics. These representations can be used to facilitate subsequent code-related tasks. The abstract syntax tree (AST), a fundamental code feature, illustrates the syntactic information of the source code and has been widely used in code representation learning. However, there is still a lack of systematic and quantitative evaluation of how well AST-based code representation facilitates subsequent code-related tasks. In this paper, we first conduct a comprehensive empirical study to explore the effectiveness of AST-based code representation in facilitating follow-up code-related tasks. To do so, we compare the performance of models trained with code token sequence (Token for short) based code representation and with AST-based code representation on three popular types of code-related tasks. Surprisingly, the overall quantitative results demonstrate that models trained with AST-based code representation consistently perform worse across all three tasks than models trained with Token-based code representation. Our further quantitative analysis reveals that models trained with AST-based code representation nevertheless outperform models trained with Token-based code representation on certain subsets of samples across all three tasks. We also conduct comprehensive experiments to evaluate and reveal the impact of the choice of AST parsing/preprocessing/encoding methods on AST-based code representation and subsequent code-related tasks. Our study provides future researchers with detailed guidance on how to select solutions at each stage to fully exploit the AST.
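The contrast the abstract draws between Token-based and AST-based code representation can be sketched with Python's standard `tokenize` and `ast` modules. This is a simplified illustration of the two input views, not the paper's actual pipeline: the token view is the flat lexical sequence, while the AST view here is linearized by a breadth-first walk over node type names (one common preprocessing choice among the parsing/preprocessing/encoding options the paper evaluates).

```python
import ast
import io
import tokenize

code = "def add(a, b):\n    return a + b\n"

# Token-based view: the flat sequence of lexical tokens,
# keeping only tokens with visible text (drops NEWLINE/INDENT/etc.).
tokens = [
    tok.string
    for tok in tokenize.generate_tokens(io.StringIO(code).readline)
    if tok.string.strip()
]

# AST-based view: parse the code and linearize the syntax tree by a
# breadth-first walk over node type names.
tree = ast.parse(code)
node_types = [type(node).__name__ for node in ast.walk(tree)]

print(tokens)      # ['def', 'add', '(', 'a', ',', 'b', ')', ':', 'return', 'a', '+', 'b']
print(node_types)  # e.g. ['Module', 'FunctionDef', 'arguments', 'Return', ...]
```

Note how the AST view surfaces structural node types such as `FunctionDef` and `BinOp` that are absent from the token sequence; a model consuming the AST sees syntax explicitly, at the cost of a longer and differently shaped input.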


URL

https://arxiv.org/abs/2312.00413

PDF

https://arxiv.org/pdf/2312.00413.pdf

