Abstract
Authorship style transfer aims to rewrite a given text in a specified target style while preserving the original meaning of the source. Existing approaches rely on the availability of a large number of target style exemplars for model training; however, they overlook cases where only a limited number of target style examples are available. The development of parameter-efficient transfer learning techniques and policy optimization (PO) approaches suggests that lightweight PO is a feasible route to low-resource style transfer. In this work, we propose a simple two-step tune-and-optimize technique for low-resource textual style transfer. We apply our technique to authorship transfer as well as a larger-data native-language style task, and in both cases find that it outperforms state-of-the-art baseline models.
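The two-step tune-and-optimize pipeline can be illustrated with a toy sketch: step 1 trains only a small additive parameter delta (in the spirit of parameter-efficient tuning) on the few target-style exemplars, and step 2 refines it with a simple reward-driven policy-optimization loop. The "model", the `style_score` reward, and all update rules below are illustrative assumptions, not the paper's actual implementation.

```python
# Toy illustration of a two-step tune-and-optimize pipeline for
# low-resource style transfer. All components here are stand-ins.
import random

random.seed(0)

# The "model" is a parameter vector; in this toy setup, a higher mean
# value stands in for output that better matches the target style.
base_params = [0.0] * 4

def style_score(params):
    # Stand-in reward: how strongly the output matches the target style.
    return sum(params) / len(params)

# Step 1: parameter-efficient tuning. Train only a small additive delta
# on the few target-style exemplars, leaving the base model frozen.
delta = [0.0] * 4
for _ in range(50):
    for i in range(len(delta)):
        tuned = [b + d for b, d in zip(base_params, delta)]
        # Gradient of a squared loss pulling the style score toward 1.0.
        grad = 2 * (style_score(tuned) - 1.0) / len(delta)
        delta[i] -= 0.5 * grad

tuned = [b + d for b, d in zip(base_params, delta)]

# Step 2: lightweight policy optimization. Refine the tuned parameters
# with a hill-climbing, REINFORCE-flavored loop that accepts candidates
# only when the style reward improves.
for _ in range(100):
    noise = [random.gauss(0, 0.1) for _ in tuned]
    candidate = [t + n for t, n in zip(tuned, noise)]
    if style_score(candidate) > style_score(tuned):
        tuned = candidate  # move toward higher-reward outputs

print(round(style_score(tuned), 2))
```

In a real system the frozen base would be a pretrained language model, the delta would be low-rank adapter weights, and the reward would come from a target-style classifier; the point of the sketch is only the division of labor between the tuning step and the optimization step.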
URL
https://arxiv.org/abs/2403.08043