Please use this identifier to cite or link to this item:
http://hdl.handle.net/11375/26127
Title: Confidence Distillation for Efficient Action Recognition
Authors: Manzuri Shalmani, Shervin
Advisors: Chiang, Fei; Zheng, Rong
Department: Computing and Software
Keywords: Deep Learning; Computer Vision; Artificial Intelligence; Efficient Inference; Regularization; Loss Function; Machine Learning; Distillation
Publication Date: 2020
Abstract: Modern neural networks are powerful predictive models, but they perform poorly at recognizing that their predictions may be wrong and at measuring the certainty of their beliefs. For models built on the ReLU, one of the most common activation functions, and its variants, even a well-calibrated model can produce incorrect but high-confidence predictions. In the related task of action recognition, most current classification methods rely on clip-level classifiers: they densely sample a given video into non-overlapping, same-sized clips and aggregate the clip results with an aggregation function, typically averaging, to obtain video-level predictions. While this approach has been shown to be effective, it is sub-optimal in recognition accuracy and carries a high computational overhead. To mitigate both issues, we propose the confidence distillation framework, which first teaches the student a representation of the teacher's uncertainty and second divides the task of full-video prediction between the student and the teacher models. We conduct extensive experiments on three action recognition datasets and demonstrate that our framework achieves state-of-the-art results in action recognition accuracy and computational efficiency.
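The clip-level pipeline the abstract describes can be sketched as follows. This is an illustrative sketch only, not the thesis implementation: the function names, the clip length, and the confidence threshold are all assumptions, and the student/teacher classifiers are stand-ins for the distilled and full networks.

```python
import numpy as np

def sample_clips(video, clip_len):
    """Densely sample a (frames, features) array into non-overlapping,
    same-sized clips, as described in the abstract."""
    n = len(video) // clip_len
    return [video[i * clip_len:(i + 1) * clip_len] for i in range(n)]

def video_prediction(video, clip_classifier, clip_len=16):
    """Aggregate clip-level class scores into a video-level prediction
    using the typical aggregation function: averaging."""
    clips = sample_clips(video, clip_len)
    scores = np.stack([clip_classifier(c) for c in clips])
    return scores.mean(axis=0)

def routed_prediction(video, student, teacher, clip_len=16, conf_thresh=0.9):
    """Hypothetical confidence-routed inference: the cheap student answers
    when its (distilled) confidence is high; otherwise the prediction is
    deferred to the expensive teacher. The threshold 0.9 is illustrative."""
    student_scores = video_prediction(video, student, clip_len)
    if student_scores.max() >= conf_thresh:
        return student_scores  # cheap path: student is confident
    return video_prediction(video, teacher, clip_len)  # defer to teacher
```

Under this routing, the teacher runs only on the fraction of videos where the student is uncertain, which is where the computational savings of splitting full-video prediction between the two models would come from.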
URI: http://hdl.handle.net/11375/26127
Appears in Collections: Open Access Dissertations and Theses
Files in This Item:
File | Description | Size | Format
---|---|---|---
manzurishalmani_shervin_finalsubmission202012_masters.pdf | Primary Thesis File | 7.95 MB | Adobe PDF
Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.