Preprint
arXiv.org, 2023
APA
Roy, S., Wald, T., Koehler, G., Rokuss, M. R., Disch, N., Holzschuh, J., Zimmerer, D., & Maier-Hein, K. (2023). SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model. arXiv.org.
Chicago/Turabian
Roy, Saikat, Tassilo Wald, Gregor Koehler, Maximilian R. Rokuss, Nico Disch, Julius Holzschuh, David Zimmerer, and Klaus Maier-Hein. “SAM.MD: Zero-Shot Medical Image Segmentation Capabilities of the Segment Anything Model.” arXiv.org (2023).
MLA
Roy, Saikat, et al. “SAM.MD: Zero-Shot Medical Image Segmentation Capabilities of the Segment Anything Model.” arXiv.org, 2023.
BibTeX
@article{roy2023sammd,
  title   = {SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model},
  year    = {2023},
  journal = {arXiv.org},
  author  = {Roy, Saikat and Wald, Tassilo and Koehler, Gregor and Rokuss, Maximilian R. and Disch, Nico and Holzschuh, Julius and Zimmerer, David and Maier-Hein, Klaus}
}
Abstract
Foundation models have taken over natural language processing and image generation domains due to the flexibility of prompting. With the recent introduction of the Segment Anything Model (SAM), this prompt-driven paradigm has entered image segmentation with a hitherto unexplored abundance of capabilities. The purpose of this paper is to conduct an initial evaluation of the out-of-the-box zero-shot capabilities of SAM for medical image segmentation, by evaluating its performance on an abdominal CT organ segmentation task, via point or bounding box based prompting. We show that SAM generalizes well to CT data, making it a potential catalyst for the advancement of semi-automatic segmentation tools for clinicians. We believe that this foundation model, while not reaching state-of-the-art segmentation performance in our investigations, can serve as a highly potent starting point for further adaptations of such models to the intricacies of the medical domain.
Keywords: medical image segmentation, SAM, foundation models, zero-shot learning
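The abstract refers to prompting SAM with points or bounding boxes on CT slices. The sketch below is not the authors' evaluation pipeline; it is a minimal illustration of that prompting style using Meta's publicly released segment-anything package. The checkpoint filename, CT windowing values, and prompt coordinates are placeholder assumptions, and the random array merely stands in for a real abdominal CT slice.

import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Assumption: a ViT-H SAM checkpoint has been downloaded from the official repository.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

def ct_to_rgb(ct_slice, center=40.0, width=400.0):
    # Window the Hounsfield units (soft-tissue window assumed) and replicate to
    # three channels, since SAM expects an HxWx3 uint8 RGB image.
    lo, hi = center - width / 2.0, center + width / 2.0
    norm = np.clip((ct_slice - lo) / (hi - lo), 0.0, 1.0)
    return np.repeat((norm * 255).astype(np.uint8)[:, :, None], 3, axis=2)

ct_slice = np.random.uniform(-1000.0, 1000.0, size=(512, 512))  # stand-in for a real CT slice
predictor.set_image(ct_to_rgb(ct_slice))

# Point prompt: one foreground click (label 1) assumed to lie inside the target organ.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256.0, 256.0]]),  # hypothetical click location (x, y)
    point_labels=np.array([1]),
    multimask_output=True,  # SAM proposes several masks; keep the highest-scoring one
)
point_mask = masks[np.argmax(scores)]

# Box prompt: a hypothetical organ bounding box in XYXY pixel coordinates.
box_masks, _, _ = predictor.predict(
    box=np.array([200.0, 180.0, 320.0, 330.0]),
    multimask_output=False,
)
box_mask = box_masks[0]

In the setting the abstract describes, masks produced this way would then be compared slice by slice against reference organ labels to quantify zero-shot performance.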