Abstract
We conducted a survey of 135 software engineering (SE) practitioners to understand how they use Generative AI-based chatbots like ChatGPT for SE tasks. We find that they want to use ChatGPT for SE tasks like software library selection, but they often worry about the truthfulness of ChatGPT responses. We developed a suite of techniques and a tool called CID (ChatGPT Incorrectness Detector) to automatically test for and detect incorrectness in ChatGPT responses. CID works by iteratively prompting ChatGPT with contextually similar but textually divergent questions, generated using an approach based on metamorphic relationships in texts. The underlying principle in CID is that, for a given question, a response that differs from the other responses (across multiple incarnations of the question) is likely incorrect. In a benchmark study of library selection, we show that CID can detect incorrect responses from ChatGPT with an F1-score of 0.74 to 0.75.
URL
https://arxiv.org/abs/2403.16347