Please use this identifier to cite or link to this item:
http://hdl.handle.net/11375/32269
Full metadata record
DC Field | Value | Language
---|---|---
dc.contributor.advisor | He, Wenbo | - |
dc.contributor.author | Xu, Zhiwei | - |
dc.date.accessioned | 2025-08-29T18:15:14Z | - |
dc.date.available | 2025-08-29T18:15:14Z | - |
dc.date.issued | 2025 | - |
dc.identifier.uri | http://hdl.handle.net/11375/32269 | - |
dc.description.abstract | In recent years, deep learning has become the foundation of modern computer vision, enabling machines to recognize objects, understand scenes, and make decisions from visual data. However, deep neural networks can be vulnerable to security threats and unstable behavior, especially when exposed to adversarial inputs, poisoned training data, or complex real-world environments. This thesis presents three research efforts to improve the security and robustness of deep learning in computer vision. First, we propose a new defense method called Multi-Pronged Defense (MPD), which protects deep neural networks from backdoor attacks: hidden manipulations that cause a model to behave incorrectly when a trigger is present. MPD combines semi-supervised learning, balanced data sampling, and neuron suppression to block a range of backdoor strategies across different datasets. Second, we design a novel attention mechanism for vision transformers that incorporates position-aware operations. This structure improves the model's sensitivity to spatial patterns, much as convolutional neural networks (CNNs) do, and outperforms traditional self-attention while maintaining architectural flexibility. Third, we introduce the Balanced Object Detector (BOD), an object detection framework that does not rely on feature pyramid networks (FPNs). By using consistent receptive fields and sharing parameters across detection branches, BOD achieves higher accuracy on small and medium objects and shows better generalization and resistance to adversarial attacks. Together, these contributions advance deep learning models that are not only accurate, but also secure, stable, and reliable in real-world visual applications. | en_US |
dc.language.iso | en | en_US |
dc.subject | Computer Vision | en_US |
dc.subject | Backdoor Defense | en_US |
dc.subject | Object Detection | en_US |
dc.subject | Vision Transformer | en_US |
dc.title | Secure and Robust Deep Learning in Computer Vision | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | Computing and Software | en_US |
dc.description.degreetype | Thesis | en_US |
dc.description.degree | Doctor of Philosophy (PhD) | en_US |
dc.description.layabstract | This thesis explores how to make deep learning in computer vision more secure and robust. It introduces a defense method against backdoor attacks, a new attention mechanism that improves spatial sensitivity in vision transformers, and a reliable object detection architecture that performs well on small objects and under adversarial conditions. Together, these methods enhance the stability, generalization, and trustworthiness of visual learning systems. | en_US |
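
The abstract above names three techniques without giving their mechanics. To make the first concrete: the record does not say how MPD's neuron suppression works, so the following is a minimal PyTorch sketch of one plausible variant, zeroing convolutional channels that stay dormant on trusted clean data, in the spirit of pruning-based backdoor defenses. `suppress_dormant_channels` and all shapes here are hypothetical illustrations, not the thesis's code.

```python
import torch
import torch.nn as nn

def suppress_dormant_channels(conv: nn.Conv2d, clean_batch: torch.Tensor,
                              prune_ratio: float = 0.1) -> None:
    """Zero out the conv channels least active on clean data.

    Hypothetical sketch: channels that stay dormant on clean inputs
    are candidates for carrying a backdoor trigger, so we suppress them.
    """
    with torch.no_grad():
        acts = conv(clean_batch)                    # (N, C, H, W)
        mean_act = acts.abs().mean(dim=(0, 2, 3))   # per-channel mean activation
        k = max(1, int(prune_ratio * mean_act.numel()))
        dormant = torch.topk(mean_act, k, largest=False).indices
        conv.weight[dormant] = 0.0                  # suppress those channels
        if conv.bias is not None:
            conv.bias[dormant] = 0.0

# Toy usage with a stand-in layer and a batch of trusted clean images.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
clean = torch.randn(8, 3, 32, 32)
suppress_dormant_channels(conv, clean, prune_ratio=0.25)
```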
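
The second contribution, position-aware attention for vision transformers, is likewise only summarized. Below is a minimal sketch of one standard way to make self-attention position-aware: a learned relative-position bias added to the attention scores. `PositionAwareAttention` is a hypothetical stand-in; the thesis's actual operator may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionAwareAttention(nn.Module):
    """Single-head self-attention with a learned relative-position bias.

    Hypothetical sketch of a position-aware attention operator.
    """
    def __init__(self, dim: int, seq_len: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # One learnable bias per (query, key) relative offset.
        self.rel_bias = nn.Parameter(torch.zeros(2 * seq_len - 1))
        idx = torch.arange(seq_len)
        self.register_buffer("rel_idx", idx[:, None] - idx[None, :] + seq_len - 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, L, D)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        scores = scores + self.rel_bias[self.rel_idx]    # inject position info
        return self.proj(F.softmax(scores, dim=-1) @ v)

# Toy usage: 16 tokens of width 32.
attn = PositionAwareAttention(dim=32, seq_len=16)
out = attn(torch.randn(2, 16, 32))   # -> (2, 16, 32)
```

The bias term is the only difference from plain self-attention, which is what lets the scores depend on spatial offsets the way convolutions do.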
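
For the third contribution, the abstract says BOD drops FPNs and shares parameters across detection branches. The sketch below shows only the parameter-sharing idea: one head, with a single set of weights, scoring every branch's feature map. `SharedHead` and its shapes are assumptions for illustration, not BOD's architecture.

```python
import torch
import torch.nn as nn

class SharedHead(nn.Module):
    """One detection head reused across branches (no FPN).

    Hypothetical sketch: each branch is assumed to supply features with
    a matched receptive field, and the head's weights are shared.
    """
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.cls = nn.Conv2d(channels, num_classes, kernel_size=1)
        self.box = nn.Conv2d(channels, 4, kernel_size=1)

    def forward(self, feats: list[torch.Tensor]):
        # The identical weights score every branch.
        return [(self.cls(f), self.box(f)) for f in feats]

# Toy usage: three branches at different spatial resolutions.
head = SharedHead(channels=64, num_classes=80)
branches = [torch.randn(1, 64, s, s) for s in (40, 20, 10)]
outs = head(branches)   # per-branch (class logits, box deltas)
```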
Appears in Collections: | Open Access Dissertations and Theses |
Files in This Item:
File | Description | Size | Format
---|---|---|---
Xu_Zhiwei_202508_Doctor-of-Philosophy.pdf | | 8.18 MB | Adobe PDF
Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.