Abstract
Large language models (LLMs) have demonstrated impressive generalization capabilities on specific tasks when trained with human-written instruction data. However, the limited quantity, diversity, and professional expertise of such instruction data raise concerns about how well LLMs perform on psychotherapy tasks when given domain-specific instructions. To address this, we first propose Domain-Specific Assistant Instructions based on AlexanderStreet therapy, and second, we apply an adaptation fine-tuning method and a retrieval-augmented generation method to improve pre-trained LLMs. Through quantitative evaluation of linguistic quality with both automatic metrics and human judgments, we observe that LLMs adapted with the Psychotherapy Assistant Instructions outperform state-of-the-art LLM response baselines. Our Assistant-Instruction approach offers a semi-annotation method for aligning pre-trained LLMs with instructions and for providing them with additional psychotherapy knowledge.
URL
https://arxiv.org/abs/2404.16160