Abstract
This study explores the adaptation of large language models (LLMs) to urban renewal, with the goal of improving model performance and text-generation quality on knowledge question-answering (QA) tasks. Based on ChatGLM, we automatically generate QA datasets from a corpus of urban renewal scientific literature in a self-instruct manner, and then jointly fine-tune the model with the Prefix-tuning and LoRA methods to create an LLM for urban renewal. By guiding the LLM to generate QA pairs automatically from prompts and given text, datasets for the urban renewal domain can be obtained quickly, providing data support for fine-tuning LLMs. The experimental results show that the proposed joint fine-tuning method significantly improves LLM performance on QA tasks: compared with LoRA fine-tuning alone, it improves BLEU and ROUGE scores on the test set by about 5%; compared with the model before fine-tuning, it improves BLEU and ROUGE scores by about 15%-20%. This study demonstrates the effectiveness and superiority of jointly fine-tuning ChatGLM with Prefix-tuning and LoRA for urban renewal knowledge QA, and provides a new approach for fine-tuning LLMs on urban renewal-related tasks.
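The abstract does not give implementation details, so the following is a minimal sketch of the self-instruct QA generation step, assuming the THUDM/chatglm-6b checkpoint as the base model and an illustrative prompt template; the paper's actual prompt wording and chunking are not specified.

```python
# Sketch: self-instruct-style QA pair generation from an urban renewal passage.
# MODEL_NAME and the prompt template are assumptions, not the paper's exact setup.
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "THUDM/chatglm-6b"  # assumed base checkpoint; the paper only says "ChatGLM"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True).half().cuda().eval()

PROMPT = (
    "Read the following passage from the urban renewal literature and write "
    "3 question-answer pairs that can be answered from the passage alone.\n"
    "Format each pair as:\nQ: ...\nA: ...\n\nPassage:\n{passage}"
)

def generate_qa_pairs(passage: str) -> str:
    # ChatGLM-6B exposes a custom `chat` helper that returns (response, history).
    response, _ = model.chat(tokenizer, PROMPT.format(passage=passage), history=[])
    return response
```

For the joint Prefix + LoRA fine-tuning itself, one plausible realization (an assumption, since the paper does not publish code here) combines ChatGLM-6B's built-in P-Tuning v2 prefix encoder, enabled via `pre_seq_len`, with a PEFT LoRA adapter on the fused attention projection; all hyperparameters below are illustrative.

```python
# Sketch: joint prefix (P-Tuning v2) + LoRA fine-tuning setup for ChatGLM-6B.
from transformers import AutoConfig, AutoModel
from peft import LoraConfig, TaskType, get_peft_model

config = AutoConfig.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
config.pre_seq_len = 128          # trainable prefix length (assumed value)
config.prefix_projection = False  # P-Tuning v2 style: no MLP reparameterization

model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b", config=config, trust_remote_code=True
).half().cuda()

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # LoRA rank (assumed)
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],   # ChatGLM's fused QKV projection
    modules_to_save=["prefix_encoder"],   # keep the prefix encoder trainable too
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only prefix embeddings + LoRA matrices train
```

The two methods are complementary: the prefix injects trainable past key-values into attention at every layer, while LoRA adds low-rank updates to the weight matrices, which is consistent with the reported ~5% BLEU/ROUGE gain over LoRA alone.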
URL
https://arxiv.org/abs/2311.15490