Abstract
In interventional radiology, short video sequences of vein structures in motion are captured to help medical personnel identify vascular issues or plan interventions. Semantic segmentation can greatly improve the usefulness of these videos by indicating the exact positions of vessels and instruments, thus reducing ambiguity. We propose a real-time segmentation method for these tasks, based on a U-Net trained in a Siamese architecture from automatically generated annotations. We use noisy low-level binary segmentation and optical flow to generate multi-class annotations that are successively improved in a multistage segmentation approach. We significantly improve the performance of a state-of-the-art U-Net at a processing speed of 90 fps.
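The core annotation-generation idea, combining a per-frame binary segmentation with optical flow, can be illustrated by propagating a mask from one frame to the next along the flow field. The sketch below is not the paper's implementation; the function name and the nearest-neighbor warping scheme are illustrative assumptions, with the flow stored as per-pixel `(dx, dy)` displacements.

```python
import numpy as np

def warp_mask(mask, flow):
    # Illustrative sketch (not the paper's code): propagate a binary
    # segmentation mask to the next frame using a dense optical-flow
    # field, where flow[y, x] = (dx, dy) in pixels.
    h, w = mask.shape
    out = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)            # foreground pixels
    dx = np.rint(flow[ys, xs, 0]).astype(int)
    dy = np.rint(flow[ys, xs, 1]).astype(int)
    nx = np.clip(xs + dx, 0, w - 1)      # clamp to image bounds
    ny = np.clip(ys + dy, 0, h - 1)
    out[ny, nx] = 1                      # nearest-neighbor splat
    return out

# Toy example: a single foreground pixel shifted one pixel right.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[2, 2] = 1
flow = np.zeros((5, 5, 2), dtype=np.float32)
flow[..., 0] = 1.0                       # uniform motion of +1 px in x
warped = warp_mask(mask, flow)
```

Masks warped this way across consecutive frames could then be merged into multi-class annotations and refined stage by stage, as the abstract describes.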
URL
https://arxiv.org/abs/1805.06406