Abstract
Textual information found in scene images provides high-level semantic information about the image and its context, and it can be leveraged for better scene understanding. In this paper we address the problem of scene text retrieval: given a text query, the system must return all images containing the queried text. The novelty of the proposed model consists in the use of a single-shot CNN architecture that simultaneously predicts bounding boxes and a compact text representation of the words inside them. In this way, the text-based image retrieval task can be cast as a simple nearest-neighbor search of the query text representation over the CNN outputs computed for the entire image database. Our experiments demonstrate that the proposed architecture outperforms the previous state of the art while offering a significant increase in processing speed.
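The retrieval step described in the abstract can be sketched as a nearest-neighbor search: each database image contributes the compact word representations predicted by the detector, and an image is ranked by its best-matching word box. This is a minimal illustrative sketch only; the function names, the use of cosine similarity, and the scoring scheme are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def retrieve(query_vec, image_descriptors, top_k=3):
    """Rank images by the best-matching word descriptor each contains.

    query_vec: 1-D array, the embedding of the query word (hypothetical
        stand-in for the paper's compact text representation).
    image_descriptors: list of (image_id, 2-D array) pairs, where each
        row of the array is the representation the detector predicted
        for one word bounding box in that image.
    """
    scores = []
    for image_id, descs in image_descriptors:
        # Cosine similarity between the query and every word box;
        # the image's score is its best-matching box.
        sims = descs @ query_vec / (
            np.linalg.norm(descs, axis=1) * np.linalg.norm(query_vec) + 1e-12
        )
        scores.append((image_id, float(sims.max())))
    # Higher similarity ranks first.
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]
```

In practice, the per-image descriptors would be precomputed once by the CNN, so answering a query reduces to this single pass of vector comparisons over the database.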
URL
https://arxiv.org/abs/1808.09044