Please use this identifier to cite or link to this item:
http://hdl.handle.net/11375/30418
Title: An Investigation of Advanced Deep Learning-Based Automated Models for Tumor Segmentation in Whole-body PET/CT Images
Other Titles: AN INVESTIGATION OF AUTOMATED MODELS FOR TUMOR SEGMENTATION
Authors: Pouromidi, Mahan
Advisor: Saha, Ashirbani
Department: Biomedical Engineering
Publication Date: 2024
Abstract: In this work, we focus on the segmentation of tumors in PET/CT (Positron Emission Tomography combined with Computed Tomography) images, which is crucial in routine clinical oncology. Building on recent advances in deep learning-based methodologies, we compared the relative performance of three frameworks: (a) nnU-Net, a Convolutional Neural Network (CNN)-based model; (b) a hybrid of nnU-Net with the Segment Anything Model (SAM), a large Vision Transformer (ViT) prompted by nnU-Net outputs; and (c) Swin-Unet, a U-Net-like pure transformer. All were evaluated on a publicly available dataset of PET/CT images comprising normal patients and patients with lung cancer, lymphoma, and melanoma. Our study includes a holistic performance analysis across the three cancer types and normal cases, an analysis typically omitted in the literature. Because image volumes with cancer usually contain more than one lesion (a primary tumor and potential metastases), we conducted two types of analyses. The first was performed at the image volume level, treating all lesions together as foreground and the rest as background. For the second, we applied connected-component labelling to algorithmically separate the different parts of the tumor and assessed performance at the lesion component level. At the image volume level, nnU-Net performed best among the three methods, achieving a Dice score of 73.25% for lung cancer, compared with 72.6% for lymphoma and 63% for melanoma. The median Dice scores for the largest lesion component, across the three cancer types combined, were 85% for nnU-Net, 67% for SAM with nnU-Net prompts, and 72% for Swin-Unet. Both nnU-Net and SAM with nnU-Net prompts missed 2, 4, and 4 image volumes of lung cancer, lymphoma, and melanoma patients, respectively, whereas Swin-Unet did not miss a single volume. Of 513 normal volumes, 201 were correctly identified by nnU-Net and by SAM, whereas Swin-Unet identified only 7. In conclusion, model performance varied across cancer types. nnU-Net proved to be the most reliable and precise algorithm evaluated in this study, showing the best performance both in identifying normal patients and in delineating the largest lesions.
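The lesion-level analysis described in the abstract relies on connected-component labelling to separate individual lesions in a ground-truth mask before scoring each component. A minimal sketch of that idea, assuming SciPy's `ndimage.label` with full 3D (26-voxel) connectivity; the function names and the choice to score only the largest component are illustrative, not taken from the thesis:

```python
import numpy as np
from scipy import ndimage

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * inter / denom if denom > 0 else 1.0

def largest_component_dice(pred, gt):
    """Dice for the largest connected lesion component in the ground truth.

    Connected-component labelling separates individual lesions
    (primary tumor and metastases); the prediction is then scored
    against the largest component only.
    """
    # 3x3x3 structuring element -> 26-connectivity in 3D.
    labels, n = ndimage.label(gt, structure=np.ones((3, 3, 3)))
    if n == 0:
        return None  # normal volume: no lesion components to score
    # Voxel count of each labelled component (labels are 1..n).
    sizes = ndimage.sum(gt, labels, index=range(1, n + 1))
    largest = labels == (int(np.argmax(sizes)) + 1)
    return dice(np.asarray(pred, dtype=bool), largest)
```

In the same spirit, a volume-level analysis would call `dice` directly on the full binary masks, which treats all lesions together as a single foreground class.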
URI: http://hdl.handle.net/11375/30418
Appears in Collections: Open Access Dissertations and Theses
Files in This Item:
File | Description | Size | Format
---|---|---|---
Pouromidi_Mahan_202409_MASc.pdf | | 10.07 MB | Adobe PDF
Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.