Abstract
Pretrained language models have significantly improved the performance of downstream tasks such as extractive question answering by providing high-quality contextualized word embeddings. However, training question answering models still requires large-scale annotated data in specific domains. In this work, we propose a cooperative, self-play learning framework for question generation and answering. We implement a masked answer entity extraction task with an interactive learning environment containing a question generator and an answer extractor. Given a passage with a masked entity, the question generator asks a question about that entity, while the extractor is trained to recover the masked entity from the generated question and the raw text. With this strategy, we can train question generation and answering models on any text corpus without annotation. To further improve the performance of the question answering model, we propose a reinforcement learning method that rewards generated questions which improve extraction learning. Experimental results show that our model outperforms state-of-the-art pretrained language models on standard question answering benchmarks and achieves state-of-the-art performance under the zero-shot learning setting.
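To make the self-play loop concrete, below is a minimal sketch of a single generator/extractor interaction as described in the abstract. All names and details here are illustrative assumptions, not the paper's implementation: the generator and extractor are toy stubs rather than pretrained language models, and the 0/1 reward is a simplification of the extraction-based reward that the paper feeds back to the generator via reinforcement learning.

```python
# A minimal sketch of the cooperative self-play step (illustrative assumptions
# throughout; the paper's generator and extractor are learned pretrained LMs).
import random

MASK = "[MASK]"

def mask_entity(passage, entity):
    """Replace the answer entity in the passage with a mask token."""
    return passage.replace(entity, MASK)

def generate_question(masked_passage):
    """Stand-in question generator; in the paper this is a learned model
    that asks a question about the masked entity."""
    return f"Which entity fills '{MASK}' in: {masked_passage}"

def extract_answer(question, passage, candidates):
    """Stand-in answer extractor; in the paper this is a span-extraction QA
    model reading the generated question and the raw text. Here we just
    pick a candidate at random."""
    return random.choice(candidates)

def self_play_step(passage, entity, candidates):
    masked = mask_entity(passage, entity)
    question = generate_question(masked)
    prediction = extract_answer(question, passage, candidates)
    # Simplified reward: 1 if the extractor recovers the masked entity.
    # In the paper, a reward reflecting improved extraction learning is
    # propagated to the generator with reinforcement learning.
    reward = 1.0 if prediction == entity else 0.0
    return question, prediction, reward

passage = "Marie Curie discovered radium in 1898."
entity = "Marie Curie"
candidates = ["Marie Curie", "radium", "1898"]
question, prediction, reward = self_play_step(passage, entity, candidates)
print(question, prediction, reward, sep="\n")
```

In a real training run, both components would be trained jointly: the extractor with a supervised span-extraction loss on the masked entity, and the generator with a policy-gradient update (e.g., REINFORCE) weighted by the extraction reward.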
URL
https://arxiv.org/abs/2103.07449