Paper Reading AI Learner

Exploiting Object-based and Segmentation-based Semantic Features for Deep Learning-based Indoor Scene Classification

2024-04-11 13:37:51
Ricardo Pereira, Luís Garrote, Tiago Barros, Ana Lopes, Urbano J. Nunes

Abstract

Indoor scenes are usually characterized by scattered objects and their relationships, which makes indoor scene classification a challenging computer vision task. Despite the significant performance boost that deep-learning-based methods have brought to classification tasks in recent years, limitations such as inter-category ambiguity and intra-category variation have been holding back their performance. To overcome such issues, gathering semantic information has proven to be a promising route towards a more complete and discriminative feature representation of indoor scenes. Therefore, the work described in this paper exploits semantic information obtained from both object detection and semantic segmentation techniques. While object detection techniques provide the 2D locations of objects, from which spatial distributions between objects can be obtained, semantic segmentation techniques provide pixel-level information that yields a spatial distribution and shape-related features of the segmentation categories. Hence, a novel approach that uses a semantic segmentation mask to provide a Hu-moments-based shape characterization of the segmentation categories, designated Segmentation-based Hu-Moments Features (SHMFs), is proposed. Moreover, a three-main-branch network, designated GOS$^2$F$^2$App, that exploits deep-learning-based global features, object-based features, and semantic-segmentation-based features is also proposed. GOS$^2$F$^2$App was evaluated on two indoor scene benchmark datasets, SUN RGB-D and NYU Depth V2, on both of which, to the best of our knowledge, state-of-the-art results were achieved, providing evidence of the effectiveness of the proposed approach.
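The abstract names SHMFs but does not give their exact construction. A minimal sketch of what per-category Hu-moment shape features could look like, computed from a semantic segmentation mask with NumPy (the function names `hu_moments` and `shmf_features`, and the fixed-length concatenation, are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def hu_moments(mask):
    """Seven Hu moment invariants of a binary mask (H x W, bool/0-1)."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))          # zeroth moment = pixel count
    if m00 == 0:
        return np.zeros(7)        # category absent from the image
    dx, dy = xs - xs.mean(), ys - ys.mean()

    def eta(p, q):
        # scale-normalized central moment eta_pq = mu_pq / mu_00^(1+(p+q)/2)
        return np.sum(dx**p * dy**q) / m00 ** (1 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02)**2 + 4*n11**2
    h3 = (n30 - 3*n12)**2 + (3*n21 - n03)**2
    h4 = (n30 + n12)**2 + (n21 + n03)**2
    h5 = ((n30 - 3*n12)*(n30 + n12)*((n30 + n12)**2 - 3*(n21 + n03)**2)
          + (3*n21 - n03)*(n21 + n03)*(3*(n30 + n12)**2 - (n21 + n03)**2))
    h6 = ((n20 - n02)*((n30 + n12)**2 - (n21 + n03)**2)
          + 4*n11*(n30 + n12)*(n21 + n03))
    h7 = ((3*n21 - n03)*(n30 + n12)*((n30 + n12)**2 - 3*(n21 + n03)**2)
          - (n30 - 3*n12)*(n21 + n03)*(3*(n30 + n12)**2 - (n21 + n03)**2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

def shmf_features(seg_mask, num_classes):
    """Per-category shape descriptor: concatenated Hu moments, 7 * num_classes."""
    return np.concatenate([hu_moments(seg_mask == c) for c in range(num_classes)])
```

Because Hu moments are translation, scale, and (except for the sign of h7) rotation invariant, the resulting vector characterizes each category's shape regardless of where it appears in the image, which is consistent with the shape-characterization role the abstract assigns to SHMFs.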

URL

https://arxiv.org/abs/2404.07739

PDF

https://arxiv.org/pdf/2404.07739.pdf
