Abstract
Melanoma is a highly aggressive form of skin cancer, and accurately segmenting pigmented lesions is crucial for early diagnosis. However, manual annotation is time-consuming, subjective, and requires dermatological expertise. To address these challenges, we propose a fully unsupervised segmentation approach using the Grounded-Segment-Anything model (Grounded-SAM), which operates without any annotated training data. Grounded-SAM combines Grounding-DINO for object detection with the Segment Anything Model (SAM) for segmentation. We evaluated the method on 2,594 dermoscopic images from the 2018 International Skin Imaging Collaboration (ISIC) dataset and compared its performance with Otsu's thresholding method, using the Jaccard index as the primary evaluation metric. The proposed method achieved an average Jaccard coefficient of 0.509, significantly outperforming Otsu's method (0.386, p < 0.05). Moreover, for images in which Grounding-DINO successfully detected lesion bounding boxes, segmentation accuracy improved further, yielding a coefficient of 0.588. Despite these improvements, the method showed limitations in cases with low-contrast lesions, small lesion areas, and pronounced vignetting. Our findings demonstrate that Grounded-SAM enables effective segmentation of pigmented skin lesions without annotated datasets, offering a promising way to reduce annotation costs in AI-assisted melanoma detection. Future research should focus on prompt selection and on integrating hybrid annotation strategies to enhance clinical applicability.
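To make the evaluation protocol concrete, the sketch below (a minimal illustration, not the paper's implementation; function names are assumptions) shows an Otsu-threshold baseline mask and the Jaccard index used to compare a predicted lesion mask against a ground-truth annotation.

```python
# Minimal sketch of the evaluation described in the abstract (illustrative only):
# an Otsu-threshold baseline mask and the Jaccard index between binary masks.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu


def otsu_lesion_mask(rgb_image: np.ndarray) -> np.ndarray:
    """Baseline segmentation: Otsu threshold on the grayscale image.
    Pigmented lesions are typically darker than surrounding skin, so
    pixels below the threshold are taken as lesion."""
    gray = rgb2gray(rgb_image)
    t = threshold_otsu(gray)
    return gray < t


def jaccard_index(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Jaccard coefficient |A ∩ B| / |A ∪ B| between two binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, gt).sum()) / float(union)
```

In the reported setup, a mask predicted by Grounded-SAM would be scored with `jaccard_index` against the ISIC ground truth and compared with the score of `otsu_lesion_mask` on the same image.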
Published on: March 28, 2025
doi: 10.17756/micr.2025-116
Citation: Nagaoka T. 2025. Accurate Segmentation of Pigmented Skin Lesions Using Grounded-segment-anything. J Med Imaging Case Rep 9(1): 20-25.