Abstract
Cell image segmentation is usually implemented with fully supervised deep learning methods, which rely heavily on large amounts of annotated training data. However, owing to the complexity of cell morphology and the specialized knowledge required, pixel-level annotation of cell images is highly labor-intensive. To address this problem, we propose an active learning framework for cell segmentation using bounding box annotations, which greatly reduces the data annotation cost of cell segmentation algorithms. First, we construct a box-supervised segmentation method (denoted YOLO-SAM) by combining the YOLOv8 detector with the Segment Anything Model (SAM), which effectively reduces the complexity of data annotation. We then integrate it into an active learning framework that employs MC DropBlock to train the segmentation model with fewer box-annotated samples. Extensive experiments demonstrate that our model saves more than ninety percent of data annotation time compared to mask-supervised deep learning methods.
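The sample-selection step of such an active-learning loop can be sketched as follows. This is an illustrative toy, not the authors' implementation: `pred_fn` is a hypothetical stochastic predictor (stochasticity would come from DropBlock kept active at inference in the paper's setting; any Monte Carlo dropout variant fits the same shape), and per-image uncertainty is summarized as the mean pixelwise variance over repeated forward passes.

```python
import numpy as np

def mc_uncertainty(pred_fn, image, T=10):
    """Monte Carlo uncertainty: run the stochastic predictor T times on one
    image and average the pixelwise variance of the predicted mask probabilities."""
    probs = np.stack([pred_fn(image) for _ in range(T)])  # shape (T, H, W)
    return probs.var(axis=0).mean()

def select_for_annotation(pred_fn, pool, k=2, T=10):
    """Rank an unlabeled pool by MC uncertainty and return the indices of the
    k most uncertain images, i.e. those sent out for box annotation next."""
    scores = [mc_uncertainty(pred_fn, img, T) for img in pool]
    return np.argsort(scores)[::-1][:k]
```

In a full loop, the selected images would be box-annotated, converted to masks via the YOLO-SAM pipeline, and added to the training set before the next round.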
URL
https://arxiv.org/abs/2405.01701