Please use this identifier to cite or link to this item: http://hdl.handle.net/11375/30418
Full metadata record (DC field: value; language tag in brackets where present)

dc.contributor.advisor: Saha, Ashirbani
dc.contributor.author: Pouromidi, Mahan
dc.date.accessioned: 2024-10-13T01:18:31Z
dc.date.available: 2024-10-13T01:18:31Z
dc.date.issued: 2024
dc.identifier.uri: http://hdl.handle.net/11375/30418
dc.description.abstract [en_US]: In this work, we focus on the segmentation of tumors on PET/CT [Positron Emission Tomography used with Computed Tomography], which is crucial in routine clinical oncology. Building on recent deep learning-based methodologies, we studied the relative performance of three frameworks: (a) nnU-Net [Convolutional Neural Network (CNN)-based], (b) nnU-Net prompting a large Vision Transformer (ViT) model called the Segment Anything Model (SAM) (hybrid), and (c) Swin-Unet (a U-Net-like pure transformer) on a publicly available dataset of PET/CT images that includes normal patients and patients with lung cancer, lymphoma, and melanoma. Our study includes a holistic performance analysis across the three cancer types and normal cases, which is typically avoided in the literature. Image volumes with cancer typically contain more than one lesion (a primary tumor and potential metastases), so we conducted two types of analyses. The first analysis is at the image-volume level, treating all lesions together as foreground and everything else as background. For the second analysis, we applied connected-component labelling to algorithmically label the individual lesion components and assessed performance at the lesion-component level (a sketch of these two evaluation modes follows this record). At the image-volume level, nnU-Net performed best among the three methods, with a higher Dice score for lung cancer (73.25%) than for lymphoma (72.6%) and melanoma (63%). The median largest-lesion component-wise Dice scores for nnU-Net, SAM with nnU-Net prompts, and Swin-Unet across the three cancer types combined are 85%, 67%, and 72%, respectively. Both the nnU-Net and the SAM with nnU-Net approaches missed 2, 4, and 4 image volumes of lung cancer, lymphoma, and melanoma patients, respectively, whereas Swin-Unet did not miss a single volume. Out of 513 normal volumes, 201 were successfully identified by nnU-Net and SAM, whereas Swin-Unet identified only 7 of them. In conclusion, the performance of the models varied across the cancer types. nnU-Net proved to be the most reliable and precise algorithm evaluated in this study, showing the best performance both in identifying normal patients and in delineating the largest lesions.
dc.language.iso [en_US]: en
dc.title [en_US]: An Investigation of Advanced Deep Learning-Based Automated Models for Tumor Segmentation in Whole-body PET/CT Images
dc.title.alternative [en_US]: AN INVESTIGATION OF AUTOMATED MODELS FOR TUMOR SEGMENTATION
dc.type [en_US]: Thesis
dc.contributor.department [en_US]: Biomedical Engineering
dc.description.degreetype [en_US]: Thesis
dc.description.degree [en_US]: Master of Applied Science (MASc)
Appears in Collections: Open Access Dissertations and Theses
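The abstract describes two evaluation modes: a volume-level Dice score with all lesions pooled as foreground, and a lesion-component-level Dice score obtained after connected-component labelling of the ground truth. The sketch below illustrates both, assuming binary 3D NumPy masks; the function names, the 26-connectivity choice, and the rule for scoring the prediction against a single component are illustrative assumptions, not details taken from the thesis.

```python
# Illustrative sketch (not the thesis code): volume-level Dice over pooled
# lesions, and largest-lesion Dice after connected-component labelling,
# assuming binary 3D NumPy masks for the prediction and the ground truth.
import numpy as np
from scipy import ndimage


def dice_score(pred, gt):
    """Dice coefficient of two binary masks (defined as 1.0 if both are empty)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    if not pred.any() and not gt.any():
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())


def largest_lesion_dice(pred, gt):
    """Label ground-truth lesions as connected components (26-connectivity,
    an assumption here) and return the Dice of the largest component against
    the prediction; the exact component-matching rule in the thesis may differ."""
    structure = np.ones((3, 3, 3), dtype=bool)      # 26-connected neighbourhood
    labels, n_lesions = ndimage.label(gt.astype(bool), structure=structure)
    if n_lesions == 0:
        return float("nan")                         # no lesion in the ground truth
    sizes = np.bincount(labels.ravel())[1:]         # voxel count per lesion
    largest = int(np.argmax(sizes)) + 1             # component labels start at 1
    return dice_score(pred, labels == largest)


# Toy example: a 32^3 volume with two synthetic "lesions".
gt = np.zeros((32, 32, 32), dtype=np.uint8)
gt[4:8, 4:8, 4:8] = 1            # small lesion
gt[20:26, 20:26, 20:26] = 1      # largest lesion
pred = gt.copy()
pred[20:23, 20:26, 20:26] = 0    # the prediction misses part of the largest lesion

print("volume-level Dice:", round(dice_score(pred, gt), 3))
print("largest-lesion Dice:", round(largest_lesion_dice(pred, gt), 3))
```

In the thesis's second analysis, per-component scores like this are summarized across patients (e.g., the median largest-lesion Dice reported in the abstract); the sketch only shows the per-volume computation.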

Files in This Item:
File:        Pouromidi_Mahan_202409_MASc.pdf
Description: Access is allowed from: 2025-03-31
Size:        10.07 MB
Format:      Adobe PDF


Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.
