Robust Scatterer Number Density Segmentation of Ultrasound Images

2022-01-16 22:08:47
Ali K. Z. Tehrani, Ivan M. Rosado-Mendez, Hassan Rivaz

Abstract

Quantitative UltraSound (QUS) aims to reveal information about the tissue microstructure using backscattered echo signals from clinical scanners. Among different QUS parameters, scatterer number density is an important property that can affect the estimation of other QUS parameters. Scatterer number density can be classified as high or low: if there are more than 10 scatterers inside the resolution cell, the envelope data is considered Fully Developed Speckle (FDS); otherwise, it is considered Under Developed Speckle (UDS). In conventional methods, the envelope data is divided into small overlapping windows (a strategy we refer to here as patching), and statistical parameters such as SNR and skewness are employed to classify each patch of envelope data. However, these parameters are system dependent, meaning that their distributions can change with the imaging settings and the patch size. Therefore, reference phantoms with known scatterer number density are imaged with the same imaging settings to mitigate this system dependency. In this paper, we aim to segment regions of ultrasound data without any patching. A large dataset containing different shapes of scatterer number density and mean scatterer amplitude is generated using a fast simulation method. We employ a convolutional neural network (CNN) for the segmentation task and investigate the effect of domain shift when the network is tested on datasets with different imaging settings. The Nakagami parametric image is employed for multi-task learning to improve performance. Furthermore, inspired by reference phantom methods in QUS, a domain adaptation stage is proposed which requires only two frames of data, one from each of the FDS and UDS classes. We evaluate our method on different experimental phantoms and on in vivo data.
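
As a quick illustration of the patch-based baseline the abstract contrasts against, the sketch below (not the authors' code; it assumes NumPy/SciPy, and the patch size, stride, and tolerance are illustrative) computes per-patch envelope SNR and skewness and compares the SNR to the value expected under a Rayleigh distribution, which models FDS. It also includes a method-of-moments estimate of the Nakagami shape parameter m, the quantity behind the Nakagami parametric image used for multi-task learning (m ≈ 1 for FDS, m < 1 for pre-Rayleigh/UDS).

```python
import numpy as np
from scipy.stats import skew

# Envelope statistics of a Rayleigh distribution (fully developed speckle):
# SNR = mean/std ~= 1.91, skewness ~= 0.63.
RAYLEIGH_SNR = 1.91

def patch_statistics(envelope, patch=(32, 32), stride=(8, 8)):
    """Slide a window over the envelope image and return per-patch
    SNR and skewness maps (the conventional 'patching' strategy)."""
    ph, pw = patch
    sh, sw = stride
    rows = range(0, envelope.shape[0] - ph + 1, sh)
    cols = range(0, envelope.shape[1] - pw + 1, sw)
    snr = np.zeros((len(rows), len(cols)))
    skw = np.zeros_like(snr)
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            w = envelope[r:r + ph, c:c + pw].ravel()
            snr[i, j] = w.mean() / w.std()
            skw[i, j] = skew(w)
    return snr, skw

def nakagami_m(envelope_patch):
    """Method-of-moments Nakagami shape parameter:
    m = E[R^2]^2 / Var(R^2); m ~= 1 for FDS, m < 1 for UDS."""
    r2 = envelope_patch.ravel().astype(float) ** 2
    return r2.mean() ** 2 / r2.var()

def classify_fds(snr_map, tol=0.15):
    """Label a patch FDS when its SNR is close to the Rayleigh value;
    markedly lower SNR indicates pre-Rayleigh (UDS) statistics.
    The tolerance is an illustrative threshold, not from the paper."""
    return np.abs(snr_map - RAYLEIGH_SNR) < tol
```

Note that every statistic above is computed per patch, which is exactly the system- and patch-size dependence that the paper's patch-free CNN segmentation is designed to avoid.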

URL

https://arxiv.org/abs/2201.06143

PDF

https://arxiv.org/pdf/2201.06143.pdf

