Abstract
In this paper, we propose a robust multilingual model to improve the quality of search results. Our model not only leverages a processed, class-balanced dataset but also benefits from multitask pre-training, which yields more general representations. In the pre-training stage, we adopt a masked language modeling (MLM) task, a classification task, and a contrastive learning task to achieve strong performance. In the fine-tuning stage, we use confident learning, the exponential moving average method (EMA), adversarial training (FGM), and the regularized dropout strategy (R-Drop) to improve the model's generalization and robustness. Moreover, we use multi-granular semantic units to mine the textual metadata of queries and products, enhancing the model's representations. Our approach obtained competitive results, ranking in the top 8 in three tasks. We release the source code and pre-trained models associated with this work.
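Two of the fine-tuning techniques named above can be sketched in simplified, framework-free form. This is a minimal illustration under assumed scalar/list inputs, not the authors' implementation: the EMA update that maintains a smoothed shadow copy of a parameter, and the symmetric KL term that R-Drop adds between the predictive distributions of two dropout-perturbed forward passes.

```python
import math

def ema_update(shadow, current, decay=0.999):
    """EMA: blend the shadow (averaged) parameter toward the live value.

    In practice this is applied element-wise to every model parameter
    after each optimizer step; a scalar is used here for clarity.
    """
    return decay * shadow + (1.0 - decay) * current

def rdrop_penalty(p, q):
    """R-Drop regularizer: symmetric KL divergence between two predicted
    probability distributions p and q (e.g. softmax outputs from two
    stochastic forward passes of the same input with dropout enabled).
    """
    kl_pq = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    kl_qp = sum(qi * math.log(qi / pi) for pi, qi in zip(p, q))
    return 0.5 * (kl_pq + kl_qp)
```

In training, the R-Drop penalty is added (with a weighting coefficient) to the usual task loss, and the EMA shadow weights are typically the ones used for evaluation.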
URL
https://arxiv.org/abs/2301.13455