Abstract
Significant progress has been made in text generation by pre-trained language models (PLMs), yet distinguishing between human-written and machine-generated text poses an escalating challenge. This paper offers an in-depth evaluation of three distinct methods for this task: traditional shallow learning, Language Model (LM) fine-tuning, and Multilingual Model fine-tuning. These approaches are rigorously tested on a wide range of machine-generated texts, providing a benchmark of their ability to distinguish human-authored from machine-authored text. The results reveal considerable differences in performance across methods, emphasizing the continued need for advancement in this crucial area of NLP. This study offers valuable insights and paves the way for future research aimed at creating robust and highly discriminative models.
URL
https://arxiv.org/abs/2311.12373
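
To make the LM fine-tuning approach concrete, below is a minimal, hypothetical sketch of fine-tuning a pretrained transformer as a binary human/machine text classifier with the Hugging Face transformers library. The backbone (distilbert-base-uncased), hyperparameters, and toy training pairs are illustrative assumptions, not the paper's actual setup; the shallow-learning baseline would instead feed hand-crafted features (e.g., TF-IDF) to a classical classifier.

# Hypothetical sketch of LM fine-tuning for machine-generated text detection.
# Backbone, learning rate, epoch count, and data are illustrative assumptions.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "distilbert-base-uncased"  # assumed backbone, not the paper's choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy training pairs: (text, label) with 0 = human-written, 1 = machine-generated.
train_data = [
    ("I jotted this note down on the train this morning.", 0),
    ("As an AI language model, I can provide a comprehensive overview.", 1),
]

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # illustrative epoch count
    for text, label in train_data:
        batch = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
        outputs = model(**batch, labels=torch.tensor([label]))
        outputs.loss.backward()   # cross-entropy loss over the two classes
        optimizer.step()
        optimizer.zero_grad()

# Inference: probability that a new text is machine-generated.
model.eval()
with torch.no_grad():
    batch = tokenizer("Sample text to classify.", return_tensors="pt")
    probs = torch.softmax(model(**batch).logits, dim=-1)
print(f"P(machine-generated) = {probs[0, 1]:.3f}")

Swapping the backbone for a multilingual checkpoint such as xlm-roberta-base would correspond to the Multilingual Model fine-tuning variant evaluated in the paper.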