Abstract
The burgeoning volume of digital content across diverse modalities necessitates efficient storage and retrieval methods. Conventional approaches struggle to cope with the escalating complexity and scale of multimedia data. In this paper, we propose a framework that addresses this challenge by fusing AI-native multi-modal search capabilities with neural image compression. First, we analyze the intricate relationship between compressibility and searchability, recognizing the pivotal role each plays in the efficiency of storage and retrieval systems. We then use a simple adapter to bridge the features of Learned Image Compression (LIC) and Contrastive Language-Image Pre-training (CLIP), retaining semantic fidelity and enabling retrieval of multi-modal data. Experimental evaluations on the Kodak dataset demonstrate the efficacy of our approach, showing significant improvements in compression efficiency and search accuracy over existing methods. Our work marks a significant step toward scalable and efficient multi-modal search systems in the era of big data.
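The core idea of bridging LIC latents to CLIP's embedding space with a lightweight adapter can be sketched as follows. This is a minimal illustration, not the paper's implementation: the channel count, CLIP dimension, pooling choice, and the single affine layer are all assumptions, and the weights are random here, whereas in the paper's setting the adapter would be trained so adapted features align with CLIP embeddings.

```python
import numpy as np

# Hypothetical dimensions -- the paper does not specify these.
LIC_CHANNELS = 192   # typical latent channel count in learned image compression
CLIP_DIM = 512       # e.g. CLIP ViT-B/32 embedding size

rng = np.random.default_rng(0)

def pool_lic_latent(latent):
    """Global-average-pool a LIC latent feature map (C, H, W) to a vector (C,)."""
    return latent.mean(axis=(1, 2))

class LinearAdapter:
    """A minimal adapter: one affine map from LIC feature space to CLIP space.

    Weights are random for illustration only; in practice they would be
    trained (with the LIC and CLIP backbones frozen) to align the two spaces.
    """
    def __init__(self, in_dim, out_dim):
        self.W = rng.standard_normal((out_dim, in_dim)) / np.sqrt(in_dim)
        self.b = np.zeros(out_dim)

    def __call__(self, x):
        y = self.W @ x + self.b
        return y / np.linalg.norm(y)  # unit-normalize, as CLIP embeddings are

adapter = LinearAdapter(LIC_CHANNELS, CLIP_DIM)

# Stand-in LIC latents for 3 "images" and one CLIP text embedding (both fake).
latents = [rng.standard_normal((LIC_CHANNELS, 16, 16)) for _ in range(3)]
gallery = np.stack([adapter(pool_lic_latent(z)) for z in latents])
text_emb = rng.standard_normal(CLIP_DIM)
text_emb /= np.linalg.norm(text_emb)

# Retrieval: rank compressed-domain image features by cosine similarity
# to the text query, without ever decoding the images to pixels.
scores = gallery @ text_emb
best = int(np.argmax(scores))
print("best match:", best)
```

The design point this sketch highlights is that retrieval operates directly on the compressed-domain features, so search does not require full image reconstruction.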
URL
https://arxiv.org/abs/2404.10234