Abstract
Text style transfer is an exciting task within the field of natural language generation that is often hampered by the need for high-quality paired datasets. Furthermore, training a model for multi-attribute text style transfer requires datasets with sufficient support across all combinations of the considered stylistic attributes, adding to the challenges of training a style transfer model. This paper explores the impact of training data input diversity on the quality of the text generated by the multi-style transfer model. We construct a pseudo-parallel dataset by devising heuristics to adjust the style distribution in the training samples. We balance our training dataset using marginal and joint distributions to train our style transfer models. We observe that a balanced dataset enables more effective control over multiple styles than an imbalanced or skewed one. Through quantitative analysis, we explore the impact of multiple style distributions in training data on the style-transferred output. These findings will better inform the design of style-transfer datasets.
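The balancing idea can be made concrete with a small sketch. The snippet below is not the authors' code; it is a minimal illustration, with hypothetical field names (`sentiment`, `formality`), of balancing a dataset over the *joint* distribution of style attributes by downsampling every attribute combination to the size of the rarest one:

```python
import random
from collections import defaultdict

def balance_joint(samples, attrs, seed=0):
    """Downsample so that every combination of the style attributes
    in `attrs` (i.e., the joint distribution) is equally represented.

    `samples` is a list of dicts; `attrs` names the style attribute
    keys, e.g. ("sentiment", "formality")."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[tuple(s[a] for a in attrs)].append(s)
    n = min(len(b) for b in buckets.values())  # size of the rarest combination
    rng = random.Random(seed)
    balanced = []
    for b in buckets.values():
        balanced.extend(rng.sample(b, n))
    return balanced

# Toy pseudo-parallel data with two style attributes (hypothetical example).
data = [
    {"text": "t1", "sentiment": "pos", "formality": "formal"},
    {"text": "t2", "sentiment": "pos", "formality": "formal"},
    {"text": "t3", "sentiment": "pos", "formality": "informal"},
    {"text": "t4", "sentiment": "neg", "formality": "formal"},
    {"text": "t5", "sentiment": "neg", "formality": "informal"},
]
balanced = balance_joint(data, ("sentiment", "formality"))
# After balancing, each (sentiment, formality) combination is equally frequent.
```

Balancing only the *marginal* distribution of each attribute independently is weaker: the marginals can each be uniform while some joint combinations remain rare or absent, which is exactly the support problem the abstract raises for multi-attribute transfer.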
URL
https://arxiv.org/abs/2305.15582