Confidence Distillation for Efficient Action Recognition
| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.advisor | Chiang, Fei | |
| dc.contributor.advisor | Zheng, Rong | |
| dc.contributor.author | Manzuri Shalmani, Shervin | |
| dc.contributor.department | Computing and Software | en_US |
| dc.date.accessioned | 2021-01-03T01:29:08Z | |
| dc.date.available | 2021-01-03T01:29:08Z | |
| dc.date.issued | 2020 | |
| dc.description.abstract | Modern neural networks are powerful predictive models. However, they perform poorly when it comes to recognizing that their predictions may be wrong and measuring the certainty of their beliefs. For one of the most common activation functions, the ReLU and its variants, even a well-calibrated model can produce incorrect but high-confidence predictions. In the related task of action recognition, most current classification methods are based on clip-level classifiers that densely sample a given video into non-overlapping, same-sized clips and aggregate the results using an aggregation function, typically averaging, to obtain video-level predictions. While this approach has been shown to be effective, it is sub-optimal in recognition accuracy and incurs a high computational overhead. To mitigate both issues, we propose the confidence distillation framework, which first teaches the student a representation of the teacher's uncertainty and then divides the task of full-video prediction between the student and teacher models. We conduct extensive experiments on three action recognition datasets and demonstrate that our framework achieves state-of-the-art results in action recognition accuracy and computational efficiency. | en_US |
| dc.description.degree | Master of Science (MSc) | en_US |
| dc.description.degreetype | Thesis | en_US |
| dc.description.layabstract | We devise a distillation loss function to train an efficient sampler/classifier for video-based action recognition tasks. | en_US |
| dc.identifier.uri | http://hdl.handle.net/11375/26127 | |
| dc.language.iso | en | en_US |
| dc.subject | Deep Learning | en_US |
| dc.subject | Computer Vision | en_US |
| dc.subject | Artificial Intelligence | en_US |
| dc.subject | Efficient Inference | en_US |
| dc.subject | Regularization | en_US |
| dc.subject | Loss Function | en_US |
| dc.subject | Machine Learning | en_US |
| dc.subject | Distillation | en_US |
| dc.title | Confidence Distillation for Efficient Action Recognition | en_US |
| dc.type | Thesis | en_US |
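
The abstract describes two mechanisms: clip-level predictions aggregated by averaging, and a distillation setup in which a cheap student handles clips it is confident about while uncertain clips are escalated to the teacher. The sketch below is an illustrative reading of those ideas, not the thesis implementation; the `student`/`teacher` models, the temperature, and the confidence threshold are hypothetical placeholders, and the exact loss and sampling strategy are defined in the thesis PDF listed under Files.

```python
# Minimal sketch of clip-level aggregation, a distillation-style loss, and a
# student/teacher inference split, assuming hypothetical clip-level models
# `student` and `teacher` that map a clip tensor to class logits.
import torch
import torch.nn.functional as F

def video_prediction_by_averaging(model, clips):
    """Baseline from the abstract: densely sample non-overlapping,
    same-sized clips and average the clip-level predictions."""
    clip_probs = [F.softmax(model(clip), dim=-1) for clip in clips]
    return torch.stack(clip_probs).mean(dim=0)  # video-level prediction

def confidence_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """One plausible form of a loss that transfers the teacher's (un)certainty
    to the student: KL divergence between temperature-softened distributions.
    The actual loss used in the thesis may differ."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

def divide_inference(student, teacher, clips, confidence_threshold=0.8):
    """Divides full-video prediction between the models: the student answers
    when confident, otherwise the clip is passed to the expensive teacher.
    The threshold value is illustrative only."""
    clip_probs = []
    for clip in clips:
        p = F.softmax(student(clip), dim=-1)
        if p.max() < confidence_threshold:        # student is uncertain
            p = F.softmax(teacher(clip), dim=-1)  # fall back to the teacher
        clip_probs.append(p)
    return torch.stack(clip_probs).mean(dim=0)
```

Under this reading, the computational saving comes from running the large teacher only on the subset of clips where the distilled student remains uncertain, while the averaging step is unchanged from the clip-level baseline.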
Files
Original bundle
- Name: manzurishalmani_shervin_finalsubmission202012_masters.pdf
- Size: 7.76 MB
- Format: Adobe Portable Document Format
- Description: Primary Thesis File
License bundle
- Name: license.txt
- Size: 1.68 KB
- Description: Item-specific license agreed upon to submission