
Estimation of Predictive Uncertainty in the Supervised Segmentation of Magnetic Resonance Imaging (MRI) Diffusion Images Using Deep Ensemble Learning

dc.contributor.advisor: Noseworthy, Michael
dc.contributor.advisor: Doyle, Thomas E.
dc.contributor.author: McCrindle, Brian
dc.contributor.department: Electrical and Computer Engineering
dc.date.accessioned: 2024-03-06T20:51:15Z
dc.date.available: 2024-03-06T20:51:15Z
dc.date.issued: 2021
dc.description.abstract: With the desired deployment of Artificial Intelligence (AI), concerns over whether AI can “communicate” why it has made its decisions are of particular importance. In this thesis, we utilize predictive entropy (PE) as a surrogate for predictive uncertainty and report it for various test-time conditions that alter the testing distribution. This is done to evaluate the potential for PE to indicate when users should trust or distrust model predictions under dataset shift or out-of-distribution (OOD) conditions, two scenarios that are prevalent in real-world settings. Specifically, we trained an ensemble of three 2D-UNet architectures to segment synthetically damaged regions in fractional anisotropy scalar maps, a widely used diffusion metric that indicates microstructural white-matter damage. Baseline ensemble statistics report that the true positive rate, false negative rate, false positive rate, true negative rate, Dice score, and precision are 0.91, 0.091, 0.23, 0.77, 0.85, and 0.80, respectively. Test-time PE was reported before and after the ensemble was exposed to increasing geometric distortions (OOD), adversarial examples (OOD), and decreasing signal-to-noise ratios (dataset shift). We observed that even though PE shows a strong negative correlation with model performance for increasing adversarial severity (ρAE = −1), this correlation is not seen under distortion or SNR conditions (ρD = −0.26, ρSNR = −0.30). However, the PE variability (PE-Std) between individual model predictions was shown to be a better indicator of uncertainty, as strong negative correlations between model performance and PE-Std were seen during geometric distortions and adversarial examples (ρD = −0.83, ρAE = −1). Unfortunately, PE fails to report large absolute uncertainties under these conditions, thus restricting the analysis to correlative relationships. Finally, determining an uncertainty threshold between “certain” and “uncertain” model predictions was seen to be heavily dependent on model calibration. For augmentation conditions close to the training distribution, a single threshold could be hypothesized. However, caution must be taken if such a technique is clinically applied, as model miscalibration could nullify such a threshold for samples far from the distribution. To ensure that PE or PE-Std can be used more broadly for uncertainty estimation, further work must be completed.
dc.description.degree: Master of Applied Science (MASc)
dc.description.degreetype: Thesis
dc.identifier.uri: http://hdl.handle.net/11375/29567
dc.language.iso: en
dc.subject: Deep Learning
dc.subject: MRI
dc.subject: mTBI
dc.subject: Segmentation
dc.title: Estimation of Predictive Uncertainty in the Supervised Segmentation of Magnetic Resonance Imaging (MRI) Diffusion Images Using Deep Ensemble Learning
dc.title.alternative: ESTIMATING PREDICTIVE UNCERTAINTY IN DEEP LEARNING SEGMENTATION FOR DIFFUSION MRI
dc.type: Thesis
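
The abstract above describes computing predictive entropy (PE) from the ensemble's outputs and tracking its variability across ensemble members (PE-Std). Below is a minimal NumPy sketch of one way these quantities could be computed for a binary segmentation ensemble. The function names, the binary-entropy formulation, and the reading of PE-Std as the standard deviation of per-model entropies are illustrative assumptions, not the thesis' actual implementation.

import numpy as np

def predictive_entropy(p, eps=1e-12):
    # Binary predictive entropy per pixel for foreground probabilities p.
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def ensemble_uncertainty(member_probs):
    # member_probs: array of shape (n_models, H, W) holding each model's
    # foreground probability map for one FA slice.
    member_probs = np.asarray(member_probs)

    # Ensemble PE: entropy of the averaged probability map.
    mean_prob = member_probs.mean(axis=0)
    pe_map = predictive_entropy(mean_prob)

    # One possible PE-Std: spread of the per-model mean entropies
    # across the ensemble (an assumed reading of "PE variability").
    member_pe = predictive_entropy(member_probs)      # (n_models, H, W)
    pe_std = member_pe.mean(axis=(1, 2)).std()

    return pe_map.mean(), pe_std

# Toy usage with three hypothetical UNet outputs on a 128x128 slice.
rng = np.random.default_rng(0)
probs = rng.uniform(0.0, 1.0, size=(3, 128, 128))
pe, pe_std = ensemble_uncertainty(probs)
print(f"mean PE: {pe:.3f}, PE-Std: {pe_std:.3f}")

In a setup like the one the abstract describes, scalar summaries of this kind could be tracked as test-time distortion, adversarial severity, or SNR is varied, and compared against segmentation performance; the per-pixel PE map itself could also be inspected alongside the predicted mask.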

Files

Original bundle
Name: McCrindle_Brian_C_finalsubmission2021October_MASc.pdf
Size: 10.72 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.68 KB
Description: Item-specific license agreed upon to submission