Abstract
Automated text generation has been applied broadly in domains such as marketing and robotics, and used to build chatbots, write product reviews, and compose poetry. The ability to synthesize text, however, presents many potential risks, while access to the technology required to build generative models is becoming increasingly easy. This work is aligned with the efforts of the United Nations and other civil society organisations to highlight potential political and societal risks arising from the malicious use of text generation software, and their potential impact on human rights. As a case study, we present the findings of an experiment to generate remarks in the style of political leaders by fine-tuning a pretrained AWD-LSTM model on a dataset of speeches made at the UN General Assembly. This work highlights the ease with which this can be accomplished, as well as the threats posed by combining these techniques with other technologies.
URL
https://arxiv.org/abs/1906.01946