Abstract
Learning rich and compact representations is an open topic in many fields such as object recognition and image retrieval. Deep neural networks have made major breakthroughs on these tasks in recent years, but their representations are not necessarily as rich as needed, nor as compact as expected. To build richer representations, high-order statistics have been exploited and have shown excellent performance, but they produce much higher-dimensional features. While this drawback has been partially addressed by factorization schemes, the original compactness of first-order models has never been recovered, or only at the cost of a significant drop in performance. Our method, by jointly integrating a codebook strategy into the factorization scheme, produces compact representations while retaining second-order performance with few additional parameters. This formulation leads to state-of-the-art results on three image retrieval datasets.
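To make the dimensionality trade-off concrete, below is a minimal NumPy sketch contrasting naive second-order pooling (D² dimensions), a rank-R factorized variant, and a codebook-weighted factorization in the spirit of the abstract. All names (U, V, C, Uk, Vk), the soft-assignment scheme, and the sizes are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N local descriptors of dim D, rank-R factorization, K codewords.
N, D, R, K = 100, 64, 8, 4

X = rng.standard_normal((N, D))  # local features, e.g. from a CNN feature map

# --- Naive second-order pooling: D*D dimensions, rich but not compact.
full = (X[:, :, None] * X[:, None, :]).mean(axis=0).reshape(-1)  # D^2 = 4096 dims

# --- Factorized second-order pooling (Hadamard-style sketch): two rank-R
# projections approximate the outer-product statistics at far lower cost.
U = rng.standard_normal((D, R))
V = rng.standard_normal((D, R))
factored = ((X @ U) * (X @ V)).mean(axis=0)  # R dims

# --- Joint codebook + factorization (sketch): soft-assign each descriptor to
# K codewords, apply a per-codeword pair of projections, and concatenate.
C = rng.standard_normal((K, D))  # hypothetical codebook
logits = X @ C.T
A = np.exp(logits - logits.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)  # soft assignments, shape (N, K)

Uk = rng.standard_normal((K, D, R))
Vk = rng.standard_normal((K, D, R))
parts = [((A[:, k:k + 1] * (X @ Uk[k])) * (X @ Vk[k])).mean(axis=0)
         for k in range(K)]
compact = np.concatenate(parts)  # K*R dims, still compact

print(full.shape, factored.shape, compact.shape)  # (4096,) (8,) (32,)
```

In this sketch the codebook lets each of the K projection pairs specialize on a region of the feature space, which is one way a codebook strategy can preserve second-order expressiveness while keeping the output at K*R dimensions instead of D².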
URL
https://arxiv.org/abs/1906.01972