Abstract
PLAID, an efficient implementation of the ColBERT late interaction bi-encoder using pretrained language models for ranking, consistently achieves state-of-the-art performance in monolingual, cross-language, and multilingual retrieval. PLAID differs from ColBERT by assigning terms to clusters and representing those terms as cluster centroids plus compressed residual vectors. While PLAID is effective in batch experiments, its performance degrades in streaming settings where documents arrive over time because representations of new tokens may be poorly modeled by the earlier tokens used to select cluster centroids. PLAID Streaming Hierarchical Indexing that Runs on Terabytes of Temporal Text (PLAID SHIRTTT) addresses this concern using multi-phase incremental indexing based on hierarchical sharding. Experiments on ClueWeb09 and the multilingual NeuCLIR collection demonstrate the effectiveness of this approach on the largest collection indexed to date by the ColBERT architecture and in the multilingual setting, respectively.
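The centroid-plus-compressed-residual representation the abstract attributes to PLAID can be illustrated with a minimal sketch: cluster token embeddings with k-means, then store each embedding as a centroid id plus a coarsely quantized residual. This is a toy illustration under assumed details (uniform residual quantization, NumPy-only k-means), not the paper's actual implementation; all function names here are hypothetical.

```python
import numpy as np

def build_centroids(embeddings, k, iters=10, seed=0):
    """Toy k-means over token embeddings (stand-in for PLAID's centroid selection)."""
    rng = np.random.default_rng(seed)
    centroids = embeddings[rng.choice(len(embeddings), size=k, replace=False)]
    for _ in range(iters):
        # Assign each embedding to its nearest centroid, then recompute means.
        dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        for c in range(k):
            members = embeddings[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids

def encode(embeddings, centroids, nbits=2):
    """Represent each embedding as (centroid id, nbits-per-dimension residual code)."""
    dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=-1)
    ids = dists.argmin(axis=1)
    residuals = embeddings - centroids[ids]
    # Uniformly quantize each residual dimension into 2**nbits buckets (a crude
    # stand-in for PLAID's learned residual compression).
    lo, hi = residuals.min(), residuals.max()
    levels = 2 ** nbits
    codes = np.clip(((residuals - lo) / (hi - lo + 1e-9) * levels).astype(int),
                    0, levels - 1)
    return ids, codes, (lo, hi)

def decode(ids, codes, bounds, centroids, nbits=2):
    """Reconstruct approximate embeddings from centroid ids and residual codes."""
    lo, hi = bounds
    levels = 2 ** nbits
    residuals = lo + (codes + 0.5) / levels * (hi - lo)  # bucket midpoints
    return centroids[ids] + residuals
```

The streaming failure mode the abstract describes follows directly from this scheme: if `build_centroids` is run once on early documents, embeddings of later-arriving tokens may fall far from every stored centroid, so their residuals (and hence the quantization error) grow.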
URL
https://arxiv.org/abs/2405.00975