
An Investigation of Advanced Deep Learning-Based Automated Models for Tumor Segmentation in Whole-body PET/CT Images

dc.contributor.advisor: Saha, Ashirbani
dc.contributor.author: Pouromidi, Mahan
dc.contributor.department: Biomedical Engineering
dc.date.accessioned: 2024-10-13T01:18:31Z
dc.date.available: 2024-10-13T01:18:31Z
dc.date.issued: 2024
dc.description.abstract: In this work, we focus on the segmentation of tumors in PET/CT [Positron Emission Tomography combined with Computed Tomography] images, a task that is crucial in routine clinical oncology. Building on recent advances in deep learning-based methodologies, we studied the relative performance of three frameworks: (a) nnU-Net [Convolutional Neural Network (CNN)-based], (b) nnU-Net prompting a large Vision Transformer (ViT) model called the Segment Anything Model (SAM) (hybrid), and (c) Swin-Unet (a U-Net-like pure transformer) on a publicly available dataset of PET/CT images including normal patients and patients with lung cancer, lymphoma, and melanoma. Our study includes a holistic performance analysis across the three cancer types and normal cases, which is typically absent from the literature. Image volumes with cancer usually contain more than one lesion (a primary tumor and potential metastases), so we conducted two types of analyses. The first analysis was performed at the image volume level, treating all lesions together as foreground and the rest as background. For the second analysis, we applied connected-component labelling to algorithmically separate the individual lesions and assessed performance at the lesion component level. At the image volume level, nnU-Net performed best among the three methods for lung cancer (Dice score: 73.25%), followed by lymphoma (72.6%) and melanoma (63%). The median Dice scores for the largest lesion component, across the three cancer types combined, were 85%, 67%, and 72% for nnU-Net, SAM with nnU-Net prompts, and Swin-Unet, respectively. Both the nnU-Net and SAM-with-nnU-Net-prompts approaches missed 2, 4, and 4 image volumes of lung cancer, lymphoma, and melanoma patients, respectively, whereas Swin-Unet did not miss a single volume. Of 513 normal volumes, 201 were correctly identified by nnU-Net and SAM, whereas Swin-Unet identified only 7. In conclusion, model performance varied across the cancer types. nnU-Net proved the most reliable and precise algorithm evaluated in this study, showing the best performance in identifying normal patients and in delineating the largest lesions.
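
The two evaluation levels described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not code from the thesis: it computes an image-volume-level Dice score (all lesions treated as foreground) and a Dice score for the largest ground-truth lesion component, obtained via connected-component labelling with scipy.ndimage.label. The rule used to associate predicted voxels with the largest lesion (keeping predicted components that overlap it) is an assumption, since the abstract does not specify the thesis's exact matching protocol.

    import numpy as np
    from scipy import ndimage

    def volume_dice(pred: np.ndarray, truth: np.ndarray) -> float:
        """Image-volume-level Dice: all lesion voxels are foreground."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        denom = pred.sum() + truth.sum()
        if denom == 0:
            return 1.0  # both masks empty, e.g. a correctly identified normal volume
        return 2.0 * np.logical_and(pred, truth).sum() / denom

    def largest_component_dice(pred: np.ndarray, truth: np.ndarray) -> float:
        """Dice for the largest ground-truth lesion component.

        Connected-component labelling splits the ground-truth mask into
        individual lesions; the prediction is then scored against the
        largest one. The overlap-based matching rule below is a plausible
        choice, not necessarily the protocol used in the thesis.
        """
        truth_labels, n_truth = ndimage.label(truth.astype(bool))
        if n_truth == 0:
            return float("nan")  # normal volume: no lesion to evaluate
        # pick the largest ground-truth lesion by voxel count (skip background label 0)
        sizes = np.bincount(truth_labels.ravel())[1:]
        largest = truth_labels == (int(np.argmax(sizes)) + 1)
        # keep only predicted components that overlap the largest lesion
        pred_labels, _ = ndimage.label(pred.astype(bool))
        overlapping = np.unique(pred_labels[largest])
        matched = np.isin(pred_labels, overlapping[overlapping > 0])
        return volume_dice(matched, largest)

    # Example: a synthetic ground truth with two lesions and a partial prediction
    truth = np.zeros((32, 32, 32), dtype=bool)
    truth[2:6, 2:6, 2:6] = True        # small lesion (e.g. a metastasis)
    truth[10:20, 10:20, 10:20] = True  # largest lesion (e.g. the primary tumor)
    pred = np.zeros_like(truth)
    pred[11:19, 11:19, 11:19] = True   # partial overlap with the largest lesion
    print(volume_dice(pred, truth))             # foreground/background Dice
    print(largest_component_dice(pred, truth))  # Dice on the largest lesion only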
dc.description.degree: Master of Applied Science (MASc)
dc.description.degreetype: Thesis
dc.identifier.uri: http://hdl.handle.net/11375/30418
dc.language.iso: en
dc.title: An Investigation of Advanced Deep Learning-Based Automated Models for Tumor Segmentation in Whole-body PET/CT Images
dc.title.alternative: AN INVESTIGATION OF AUTOMATED MODELS FOR TUMOR SEGMENTATION
dc.type: Thesis

Files

Original bundle

Name: Pouromidi_Mahan_202409_MASc.pdf
Size: 9.84 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.68 KB
Format: Item-specific license agreed to upon submission