Multi-class motion-based semantic segmentation for ureteroscopy and laser lithotripsy

2021-04-02 22:47:21
Soumya Gupta, Sharib Ali, Louise Goldsmith, Ben Turney, Jens Rittscher

Abstract

Kidney stones represent a considerable burden for public health-care systems. Ureteroscopy with laser lithotripsy has become the most commonly used technique for the treatment of kidney stones. Automated segmentation of kidney stones and the laser fiber is an important first step in any automated quantitative analysis of the stones, particularly stone-size estimation, which helps the surgeon decide whether the stone requires further fragmentation. Factors such as turbid fluid inside the cavity, specularities, motion blur due to kidney movement and camera motion, bleeding, and stone debris impair the quality of vision within the kidney and lead to extended operative times. To the best of our knowledge, this is the first attempt at multi-class segmentation in ureteroscopy and laser lithotripsy data. We propose an end-to-end CNN-based framework for the segmentation of stones and the laser fiber. The proposed approach uses two sub-networks: HybResUNet, a residual version of U-Net that uses residual connections in the encoder path of U-Net, and a DVFNet that generates DVF predictions, which are then used to prune the prediction maps. We also present ablation studies that combine dilated convolutions, recurrent and residual connections, ASPP, and attention gates. We propose a compound loss function that improves our segmentation performance, and we provide an ablation study to determine the optimal data augmentation strategy. Our qualitative and quantitative results show that the proposed method outperforms state-of-the-art methods such as U-Net and DeepLabv3+, with improvements of 5.2% and 15.93%, respectively, in the combined mean of DSC and JI on our in vivo test dataset. We also show that the proposed model generalizes better to a new clinical dataset, with mean improvements of 25.4%, 20%, and 11% over U-Net, HybResUNet, and DeepLabv3+, respectively, on the same metric.
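The abstract describes HybResUNet as a U-Net variant with residual connections in the encoder path but gives no layer-level details. The PyTorch sketch below illustrates one plausible residual encoder block; the conv-BN-ReLU layout and the 1x1 projection shortcut are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One plausible residual block for the U-Net encoder path (assumed layout)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the identity shortcut matches the output channels.
        self.shortcut = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.body(x) + self.shortcut(x))
```

Replacing U-Net's plain double-convolution encoder stages with blocks like this, while keeping the decoder unchanged, yields the "residual connections in the encoder path" the abstract describes.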
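The abstract states that a compound loss improves segmentation performance but does not give its composition. The sketch below shows one common pattern for multi-class segmentation, a weighted sum of cross-entropy and soft Dice loss; the choice of components and the weight alpha are assumptions, not the paper's actual loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompoundLoss(nn.Module):
    """Assumed compound loss: alpha * cross-entropy + (1 - alpha) * soft Dice."""
    def __init__(self, num_classes: int, alpha: float = 0.5, eps: float = 1e-6):
        super().__init__()
        self.num_classes = num_classes
        self.alpha = alpha
        self.eps = eps

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits: (N, C, H, W); target: (N, H, W) with integer class indices.
        ce = F.cross_entropy(logits, target)
        probs = F.softmax(logits, dim=1)
        one_hot = F.one_hot(target, self.num_classes).permute(0, 3, 1, 2).float()
        inter = (probs * one_hot).sum(dim=(0, 2, 3))
        denom = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
        dice = (2 * inter + self.eps) / (denom + self.eps)  # per-class soft Dice
        return self.alpha * ce + (1 - self.alpha) * (1 - dice.mean())
```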
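Results are reported as DSC (Dice similarity coefficient) and JI (Jaccard index). For reference, the standard per-class computation over binary masks is:

```python
import numpy as np

def dsc_and_ji(pred: np.ndarray, gt: np.ndarray):
    """Dice similarity coefficient and Jaccard index for binary masks.
    DSC = 2|A∩B| / (|A| + |B|), JI = |A∩B| / |A∪B|."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)
    ji = inter / (union + 1e-8)
    return dsc, ji
```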

URL

https://arxiv.org/abs/2104.01268

PDF

https://arxiv.org/pdf/2104.01268.pdf

