Abstract
Whereas much of the success of the current generation of neural language models has been driven by increasingly large training corpora, relatively little research has been dedicated to analyzing these massive sources of textual data. In this exploratory analysis, we delve deeper into the Common Crawl, a colossal web corpus that is extensively used for training language models. We find that it contains a significant amount of undesirable content, including hate speech and sexually explicit content, even after filtering procedures. We conclude with a discussion of the potential impacts of this content on language models and call for a more mindful approach to corpus collection and analysis.
URL
https://arxiv.org/abs/2105.02732