Please use this identifier to cite or link to this item: http://hdl.handle.net/11375/26127
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Chiang, Fei | -
dc.contributor.advisor | Zheng, Rong | -
dc.contributor.author | Manzuri Shalmani, Shervin | -
dc.date.accessioned | 2021-01-03T01:29:08Z | -
dc.date.available | 2021-01-03T01:29:08Z | -
dc.date.issued | 2020 | -
dc.identifier.uri | http://hdl.handle.net/11375/26127 | -
dc.description.abstract | Modern neural networks are powerful predictive models, but they perform poorly at recognizing when their predictions may be wrong and at measuring the certainty of their beliefs. With ReLU, one of the most common activation functions, and its variants, even a well-calibrated model can produce incorrect but high-confidence predictions. In the related task of action recognition, most current classification methods are based on clip-level classifiers that densely sample a given video into non-overlapping, same-sized clips and aggregate the results using an aggregation function, typically averaging, to obtain video-level predictions. While this approach has been shown to be effective, it is sub-optimal in recognition accuracy and carries a high computational overhead. To mitigate both issues, we propose the confidence distillation framework, which first teaches the student a representation of the teacher's uncertainty and second divides the task of full-video prediction between the student and teacher models. We conduct extensive experiments on three action recognition datasets and demonstrate that our framework achieves state-of-the-art results in action recognition accuracy and computational efficiency. | en_US
dc.language.iso | en | en_US
dc.subject | Deep Learning | en_US
dc.subject | Computer Vision | en_US
dc.subject | Artificial Intelligence | en_US
dc.subject | Efficient Inference | en_US
dc.subject | Regularization | en_US
dc.subject | Loss Function | en_US
dc.subject | Machine Learning | en_US
dc.subject | Distillation | en_US
dc.title | Confidence Distillation for Efficient Action Recognition | en_US
dc.type | Thesis | en_US
dc.contributor.department | Computing and Software | en_US
dc.description.degreetype | Thesis | en_US
dc.description.degree | Master of Science (MSc) | en_US
dc.description.layabstract | We devise a distillation loss function to train an efficient sampler/classifier for video-based action recognition tasks. | en_US
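
The abstract above contrasts two inference schemes: the baseline that densely samples clips and averages their predictions, and a confidence distillation setup that splits full-video prediction between a cheap student and an expensive teacher. The sketch below illustrates both ideas in plain Python; the gating threshold, model stubs, and routing rule are illustrative assumptions, not the thesis's actual algorithm.

```python
# Hedged sketch of the two inference schemes described in the abstract.
# All names and the 0.8 threshold are assumptions for illustration.
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def average_aggregate(clip_logits):
    """Baseline: average per-clip logits, then take the argmax as the
    video-level label (the 'typically averaging' aggregation)."""
    mean = [sum(col) / len(clip_logits) for col in zip(*clip_logits)]
    return mean.index(max(mean))

def gated_predict(clips, student, teacher, threshold=0.8):
    """Divide the work between models: the cheap student answers when its
    aggregated confidence clears the threshold; otherwise the expensive
    teacher re-scores the clips. Returns (label, teacher_calls)."""
    s_mean = [sum(col) / len(clips) for col in zip(*(student(c) for c in clips))]
    probs = softmax(s_mean)
    conf = max(probs)
    if conf >= threshold:
        return probs.index(conf), 0  # student is confident enough; teacher skipped
    t_mean = [sum(col) / len(clips) for col in zip(*(teacher(c) for c in clips))]
    return t_mean.index(max(t_mean)), len(clips)
```

The design choice being illustrated is that computational savings come entirely from how often the student's confidence clears the threshold, which is why distilling a faithful representation of the teacher's uncertainty into the student matters.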
Appears in Collections:Open Access Dissertations and Theses

Files in This Item:
File | Description | Size | Format
manzurishalmani_shervin_finalsubmission202012_masters.pdf (Open Access) | Primary Thesis File | 7.95 MB | Adobe PDF


Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.
