Please use this identifier to cite or link to this item:
http://hdl.handle.net/11375/28048
Title: Anomaly Detection in Videos Based on Unsupervised Learning
Authors: Li, Shusheng
Advisor: He, Wenbo
Department: Computing and Software
Publication Date: 2022
Abstract: Anomaly detection for videos plays an important role in real-world applications. There are two types of anomaly detection for videos: anomalous video detection and anomalous event detection. Anomalous video detection concerns organizing video resources, enabling video-sharing platforms or public sectors to focus on entire videos belonging to a certain category. Anomalous event detection in videos applies to scenarios like monitoring in smart homes, smart cities, and the Internet of Things, allowing surveillance cameras to upload only events of interest, which reduces network traffic and cloud storage. Both tasks are challenging: because abnormal samples are unavailable, training an end-to-end supervised deep learning model is cumbersome, and representing video data is difficult because video content is unstructured. In this thesis, we propose a different method for each task.

For anomalous video detection, we propose an LSTM-autoencoder-based adversarial learning model ("VidAnomaly") that requires no abnormal samples during training. The LSTM autoencoder learns the temporal dependence of the input sequence and reconstructs it for adversarial learning. In the inference stage, the model reconstructs an abnormal input poorly, so its reconstruction error is high, because the model is trained only on normal samples and its parameters are suited only to reconstructing them. We detect abnormal samples by this high reconstruction error.

For anomalous event detection in videos, we propose Onsite Event Detection (OED), a system that enables real-time event detection on the edge. OED first trains a transformer-based autoencoder to learn the spatio-temporal representation of recently observed video data, gaining the ability to distinguish eccentric event patterns from routine ones. OED also features an updating strategy that adapts dynamically to a changing environment, so it can continuously detect events of interest in video streams. We evaluate our approaches on different datasets in both scenarios (anomalous video detection and anomalous event detection in videos). The experimental results show that our approaches are effective and outperform other methods.
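The detection principle shared by both methods in the abstract (train only on normal data, then flag inputs the model reconstructs poorly) can be sketched with a minimal linear autoencoder (equivalent to PCA) in NumPy. This is an illustrative stand-in for the thesis's LSTM and transformer autoencoders, not the actual models; the data, dimensions, and threshold choice here are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data: points near a 1-D subspace of R^3, plus small noise.
# This stands in for feature vectors of normal video frames/sequences.
normal = rng.normal(size=(500, 1)) @ np.array([[1.0, 2.0, 3.0]])
normal += 0.05 * rng.normal(size=normal.shape)

# Fit a linear autoencoder via SVD: encode to 1 dimension, decode back.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:1]  # principal direction learned from normal data only

def reconstruction_error(x):
    z = (x - mean) @ basis.T      # encode
    x_hat = z @ basis + mean      # decode (reconstruct)
    return np.linalg.norm(x - x_hat, axis=-1)

# Threshold set from the training errors, e.g. the 99th percentile.
threshold = np.percentile(reconstruction_error(normal), 99)

normal_sample = np.array([[2.0, 4.0, 6.0]])   # lies on the normal subspace
anomaly = np.array([[3.0, -5.0, 1.0]])        # far from the normal subspace

print(reconstruction_error(normal_sample)[0])  # small: reconstructed well
print(reconstruction_error(anomaly)[0])        # large: reconstructed poorly
print(reconstruction_error(anomaly)[0] > threshold)
```

The same logic carries over to the deep models: parameters fit only to normal samples reconstruct anomalies badly, and the resulting error serves as the anomaly score.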
URI: http://hdl.handle.net/11375/28048
Appears in Collections: Open Access Dissertations and Theses
Files in This Item:
File | Size | Format
---|---|---
Li_Shusheng_202204_Doctor-of-Philosophy.pdf | 5.4 MB | Adobe PDF
Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.