Abstract
Among approximate nearest neighbor search (ANNS) methods based on approximate proximity graphs, DiskANN achieves a good recall-speed balance on large-scale datasets by using both RAM and storage. Although it claims to save memory by loading vectors compressed with product quantization (PQ) into RAM, its memory usage still grows in proportion to the dataset scale. In this paper, we propose All-in-Storage ANNS with Product Quantization (AiSAQ), which offloads the compressed vectors to storage. Our method keeps memory usage during query search at $\sim$10 MB even for billion-scale datasets, with only minor performance degradation. AiSAQ also reduces the index load time before query search, which enables switching the index between multiple billion-scale datasets and significantly enhances the flexibility of retrieval-augmented generation (RAG). This method is applicable to all graph-based ANNS algorithms and can be combined with higher-performance ANNS methods in the future.
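As background for the compression the abstract refers to, here is a minimal sketch of product quantization in pure Python. The toy codebooks and function names are illustrative assumptions, not the paper's implementation; real systems train the per-subspace codebooks with k-means on the dataset.

```python
# Minimal product quantization (PQ) sketch.
# A D-dim vector is split into M subvectors; each subvector is replaced
# by the index of its nearest centroid in that subspace's codebook,
# so the vector is stored as M small integers instead of D floats.

def pq_encode(vec, codebooks):
    """Encode vec as one centroid index per subspace."""
    m = len(codebooks)
    sub_dim = len(vec) // m
    codes = []
    for i, book in enumerate(codebooks):
        sub = vec[i * sub_dim:(i + 1) * sub_dim]
        # pick the centroid minimizing squared Euclidean distance
        codes.append(min(
            range(len(book)),
            key=lambda c: sum((a - b) ** 2 for a, b in zip(sub, book[c])),
        ))
    return codes

def pq_decode(codes, codebooks):
    """Approximate reconstruction: concatenate the chosen centroids."""
    out = []
    for code, book in zip(codes, codebooks):
        out.extend(book[code])
    return out

# Two subspaces, each with a 2-entry codebook of 2-D centroids (toy data):
codebooks = [
    [[0.0, 0.0], [1.0, 1.0]],
    [[0.0, 1.0], [1.0, 0.0]],
]
codes = pq_encode([0.9, 1.1, 0.1, 0.9], codebooks)
print(codes)                         # → [1, 0]
print(pq_decode(codes, codebooks))   # → [1.0, 1.0, 0.0, 1.0]
```

DiskANN keeps these compact codes in RAM for distance estimation during graph traversal; AiSAQ's contribution, per the abstract, is moving even these codes to storage so that resident memory stays roughly constant regardless of dataset scale.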
URL
https://arxiv.org/abs/2404.06004