Abstract
Educators are increasingly concerned about the use of Large Language Models (LLMs) such as ChatGPT in programming education, particularly the risk that imperfections in Artificial Intelligence Generated Content (AIGC) detectors may be exploited for academic misconduct. In this paper, we present an empirical study that examines whether an LLM can bypass AIGC detectors by generating code for a given question under different prompt variants. We collected a dataset of 5,069 samples, each consisting of a textual description of a coding problem and its corresponding human-written Python solution. The samples were drawn from several sources: 80 from Quescol, 3,264 from Kaggle, and 1,725 from LeetCode. From this dataset, we created 13 sets of variant prompts for the coding problems and used them to instruct ChatGPT to generate code. We then assessed the performance of five AIGC detectors on the resulting outputs. Our results show that existing AIGC detectors perform poorly at distinguishing human-written code from AI-generated code.
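To make the evaluation setup concrete, here is a minimal sketch of the pipeline the abstract describes: for each sample, ChatGPT is prompted under several variants, and both the human-written and the AI-generated code are scored by an AIGC detector. The names `generate_code`, `detect_aigc`, and the example prompt variants are hypothetical stand-ins; the paper does not specify its implementation at this level of detail.

```python
# Sketch of the abstract's evaluation loop, under stated assumptions:
# generate_code() and detect_aigc() are hypothetical placeholders for a
# ChatGPT call and an AIGC-detector API, respectively.
from dataclasses import dataclass


@dataclass
class Sample:
    problem: str      # textual description of the coding problem
    human_code: str   # the human-written Python solution


# Hypothetical prompt variants; the actual study uses 13 sets.
PROMPT_VARIANTS = [
    "Solve the following problem in Python:\n{problem}",
    "Write a Python solution in the style of a student:\n{problem}",
    # ... further variants in the actual study
]


def generate_code(prompt: str) -> str:
    """Hypothetical stand-in for a ChatGPT code-generation call."""
    raise NotImplementedError


def detect_aigc(code: str) -> float:
    """Hypothetical stand-in for an AIGC detector; returns the
    estimated probability that `code` is AI-generated."""
    raise NotImplementedError


def evaluate(samples: list[Sample], threshold: float = 0.5) -> float:
    """Fraction of code snippets the detector labels correctly:
    human code should score below the threshold, AI code at or above it."""
    correct = total = 0
    for s in samples:
        for variant in PROMPT_VARIANTS:
            ai_code = generate_code(variant.format(problem=s.problem))
            correct += detect_aigc(s.human_code) < threshold
            correct += detect_aigc(ai_code) >= threshold
            total += 2
    return correct / total
```

A low accuracy from `evaluate` on such pairs would correspond to the paper's finding that current detectors separate human-written from AI-generated code poorly.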
URL
https://arxiv.org/abs/2401.03676