Please use this identifier to cite or link to this item: http://hdl.handle.net/11375/32269
Full metadata record (DC field: value)

dc.contributor.advisor: He, Wenbo
dc.contributor.author: Xu, Zhiwei
dc.date.accessioned: 2025-08-29T18:15:14Z
dc.date.available: 2025-08-29T18:15:14Z
dc.date.issued: 2025
dc.identifier.uri: http://hdl.handle.net/11375/32269
dc.description.abstract: In recent years, deep learning has become the foundation of modern computer vision applications, enabling machines to recognize objects, understand scenes, and make decisions based on visual data. However, deep neural networks can be vulnerable to security threats and unstable behavior, especially when exposed to adversarial inputs, poisoned training data, or complex real-world environments. This thesis presents three research efforts to improve the security and robustness of deep learning in computer vision. First, we propose a new defense method called Multi-Pronged Defense (MPD), which protects deep neural networks from backdoor attacks, hidden manipulations that cause models to behave incorrectly when triggered. MPD combines semi-supervised learning, balanced data sampling, and neuron suppression to effectively block various backdoor strategies across different datasets. Second, we design a novel attention mechanism for vision transformers that incorporates position-aware operations. This structure improves the model's sensitivity to spatial patterns, similar to convolutional neural networks (CNNs), and achieves better performance than traditional self-attention while maintaining architectural flexibility. Third, we introduce the Balanced Object Detector (BOD), a new object detection framework that does not rely on feature pyramid networks (FPNs). By using consistent receptive fields and parameter sharing across detection branches, BOD achieves higher accuracy on small and medium objects and shows better generalization and resistance to adversarial attacks. Together, these contributions advance the development of deep learning models that are not only accurate, but also secure, stable, and reliable in real-world visual applications.
dc.language.iso: en
dc.subject: Computer Vision
dc.subject: Backdoor Defense
dc.subject: Object Detection
dc.subject: Vision Transformer
dc.title: Secure and Robust deep learning in computer vision
dc.type: Thesis
dc.contributor.department: Computing and Software
dc.description.degreetype: Thesis
dc.description.degree: Doctor of Philosophy (PhD)
dc.description.layabstract: This thesis explores how to make deep learning in computer vision more secure and robust. It introduces a defense method against backdoor attacks, a new attention mechanism that improves spatial sensitivity in vision transformers, and a reliable object detection architecture that performs well on small objects and under adversarial conditions. Together, these methods enhance the stability, generalization, and trustworthiness of visual learning systems.
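The abstract describes the second contribution, position-aware attention, only at a high level. As a purely illustrative sketch of the general idea (augmenting the content-based attention score with a learned relative-position term so the model becomes spatially sensitive, as CNNs are), here is a generic 1-D relative-position-bias attention in NumPy. This is an assumed, textbook-style formulation, not the mechanism actually proposed in the thesis; every name and shape below is hypothetical.

```python
# Illustrative sketch only: generic relative-position-bias attention,
# NOT the thesis's actual mechanism. All names/shapes are assumptions.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def position_aware_attention(x, Wq, Wk, Wv, rel_bias):
    """x: (n, d) tokens on a 1-D grid.
    rel_bias: (2n-1,) learned biases indexed by relative offset j - i
    (shifted by n-1 so indices are non-negative)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # content term
    n = x.shape[0]
    idx = np.arange(n)
    offsets = idx[None, :] - idx[:, None] + (n - 1)  # (n, n) offset table
    scores = scores + rel_bias[offsets]              # position term
    return softmax(scores) @ v                       # weighted sum of values

rng = np.random.default_rng(0)
n, d = 4, 8
x = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = position_aware_attention(x, Wq, Wk, Wv, rng.standard_normal(2 * n - 1))
print(out.shape)  # (4, 8)
```

The position term depends only on the offset j - i, not on token content, which is what gives the attention map a CNN-like sensitivity to spatial layout; 2-D variants index the bias table by (row offset, column offset) instead.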
Appears in Collections: Open Access Dissertations and Theses

Files in This Item:
File: Xu_Zhiwei_202508_Doctor-of-Philosophy.pdf
Size: 8.18 MB
Format: Adobe PDF
Embargoed until: 2026-08-27
Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.
