Scribble2D5 is a weakly-supervised deep learning approach for segmenting volumetric medical images from scribble annotations. Recently, weakly-supervised image segmentation using weak annotations like scribbles has gained great attention, since such annotations are much easier to obtain than time-consuming and labor-intensive labeling at the pixel/voxel level. However, because scribbles lack structural information about the region of interest (ROI), existing scribble-based methods suffer from poor boundary localization. Furthermore, most current methods are designed for 2D image segmentation and do not fully leverage volumetric information when applied directly to image slices. In this paper, we propose a scribble-based volumetric image segmentation method, Scribble2D5, which tackles 3D anisotropic image segmentation and improves boundary prediction. To achieve this, we augment a 2.5D attention UNet with a proposed label propagation module to extend semantic information from scribbles, and with a combination of static and active boundary prediction to learn the ROI's boundary and regularize its shape; an active boundary loss formulation is extended to act in 3D. Extensive experiments on three public datasets demonstrate that Scribble2D5 significantly outperforms current scribble-based methods and approaches the performance of fully-supervised ones.
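To make the idea of extending semantic information from scribbles concrete, here is a minimal sketch of label propagation on a labeled slice. It assigns every unlabeled pixel the class of its geometrically nearest scribble pixel. This nearest-neighbor rule is an assumption chosen for illustration, not the paper's learned label propagation module, and the function name `propagate_scribbles` is hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def propagate_scribbles(scribble_labels):
    """Assign every unlabeled pixel the class of its nearest scribble pixel.

    scribble_labels: integer array, 0 = unlabeled, >0 = class id.
    NOTE: this geometric nearest-neighbor rule is a simple stand-in for the
    paper's learned label propagation module, used here only to illustrate
    how sparse scribble labels can be densified into a full mask.
    """
    # distance_transform_edt measures distance from nonzero entries to the
    # nearest zero entry, so mark scribble pixels as zeros; return_indices
    # gives, for each pixel, the coordinates of its nearest scribble pixel.
    nearest = distance_transform_edt(
        scribble_labels == 0,
        return_distances=False,
        return_indices=True,
    )
    return scribble_labels[tuple(nearest)]
```

Such a densified pseudo-mask could then supervise a segmentation network, with the boundary-prediction branches regularizing the resulting ROI shape.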