Please use this identifier to cite or link to this item: http://hdl.handle.net/11375/29573
Title: Two-way Multi-input Generative Neural Network for Anomaly Event Detection and Localization
Authors: Yang, Mingchen
Advisor: Shirani, Shahram
Department: Electrical and Computer Engineering
Keywords: Anomaly detection and localization;Generative adversarial network;Two-way multi-input generative neural network
Publication Date: 2022
Abstract: Anomaly event detection has become increasingly important and is of great significance for real-time monitoring systems. However, developing a reliable anomaly detection and localization model still requires overcoming many challenges, given the ambiguity in the definition of an abnormal event and the lack of ground-truth datasets for training. In this thesis, we propose a Two-way Multi-input Generative Neural Network (TMGNN), an unsupervised anomaly event detection and localization method based on the Generative Adversarial Network (GAN). TMGNN is composed of two neural networks: an appearance generation network and a motion generation network, trained on normal frames and on their corresponding motion and mosaic frames, respectively. At the testing stage, the trained model cannot properly reconstruct anomalous objects, since the network is trained only on normal frames and has not learned the patterns of anomalous cases. With the help of our new patch-based evaluation method, we use the reconstruction error to detect and localize possible anomalous objects. Our experiments show that on the UCSD Pedestrian2 dataset, our approach achieves 96.5% Area Under the Curve (AUC) under the frame-level criterion and 94.1% AUC under the pixel-level criterion, the best classification results compared to other traditional and deep learning methods.
URI: http://hdl.handle.net/11375/29573
Appears in Collections:Open Access Dissertations and Theses
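The abstract describes localizing anomalies by scoring reconstruction error patch by patch. The thesis's actual networks and evaluation pipeline are not reproduced here; the following is a minimal NumPy sketch of that patch-based scoring idea, with hypothetical function names and an assumed non-overlapping square-patch layout.

```python
import numpy as np

def patch_anomaly_scores(frame, reconstruction, patch=16):
    """Score each non-overlapping patch by its mean squared
    reconstruction error; high scores flag candidate anomalies.
    (Sketch only: patch size and layout are assumptions, not
    the thesis's exact evaluation method.)"""
    h, w = frame.shape
    err = (frame.astype(np.float64) - reconstruction.astype(np.float64)) ** 2
    scores = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            block = err[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            scores[i, j] = block.mean()
    return scores

def localize(scores, threshold):
    """Binary anomaly mask at patch resolution."""
    return scores > threshold
```

A frame whose reconstruction is poor only in one region would yield a high score for that patch alone, which is the mechanism behind pixel-level (localization) evaluation as opposed to a single frame-level score.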

Files in This Item:
File: Yang_Mingchen_Thesis_202212_MASc.pdf
Description: Open Access
Size: 5.44 MB
Format: Adobe PDF


Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.
