

Please use this identifier to cite or link to this item: http://hdl.handle.net/11375/13017
Full metadata record
DC Field                   Value                                Language
dc.contributor.advisor     Janicki, Ryszard                     en_US
dc.contributor.advisor     Qiao, S.                             en_US
dc.contributor.author      Soudkhah, Mohammad Hadi              en_US
dc.date.accessioned        2014-06-18T17:01:55Z
dc.date.available          2014-06-18T17:01:55Z
dc.date.created            2013-06-12                           en_US
dc.date.issued             2013-10                              en_US
dc.identifier.other        opendissertations/7852               en_US
dc.identifier.other        8936                                 en_US
dc.identifier.other        4218445                              en_US
dc.identifier.uri          http://hdl.handle.net/11375/13017
dc.description.abstract    Most existing classification algorithms either treat all features as equally important (equal weights) or do not analyze the consistency of the weights assigned to features. When features are not equally important, assigning consistent weights is not an obvious task. In general there are two cases. The first assumes that a given data sample contains no clue about the importance of features, so the weights are provided by a pool of experts and are usually inconsistent. The second assumes that the sample contains some information about feature importance, so the weights can be derived directly from the sample. This thesis deals with both cases. Pairwise Comparisons and Weighted Support Vector Machines are used for the first case. For the second case, a new approach is proposed, based on the observation that feature importance can be determined by the discrimination power of features. For the first case, we start with pairwise comparisons to rank the importance of features, then use distance-based inconsistency reduction to refine the weight assessment and make the comparisons more precise. As the next step, the weights are computed from the fully consistent or almost consistent pairwise comparison tables. For the second case, a novel concept of feature domain overlapping is introduced, which measures the discrimination power of features. This model is based on the assumption that less overlapping means more discrimination ability, and it produces weights characterizing the importance of particular features. In both cases, Weighted Support Vector Machines are used to classify the data. Both methods have been tested on two benchmark data sets, Iris and Vertebral. The results were clearly superior to those obtained without weights.    en_US
dc.subject                 Classification                       en_US
dc.subject                 Artificial Intelligence              en_US
dc.subject                 Support Vector Machines              en_US
dc.subject                 Pairwise Comparison                  en_US
dc.subject                 Inconsistency                        en_US
dc.subject                 Other Computer Engineering           en_US
dc.title                   Weighted Feature Classification      en_US
dc.type                    thesis                               en_US
dc.contributor.department  Computer Science                     en_US
dc.description.degree      Master of Computer Science (MCS)     en_US
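The two weighting schemes summarized in the abstract can be illustrated with a small sketch. This is not the thesis's actual implementation: the comparison matrix values are made up, the geometric-mean weight derivation is one standard method among several, the inconsistency measure is a Koczkodaj-style distance formulation assumed from the phrase "distance-based inconsistency reduction", and the overlap measure is a simplified two-class range version.

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix over 3 features:
# A[i, j] says how many times more important feature i is than feature j.
A = np.array([
    [1.0,  2.0, 4.0],
    [0.5,  1.0, 2.0],
    [0.25, 0.5, 1.0],
])

def distance_inconsistency(A):
    """Koczkodaj-style distance-based inconsistency: the worst relative
    deviation of any triad (i, k, j) from the consistency condition
    A[i, j] == A[i, k] * A[k, j]."""
    n = A.shape[0]
    worst = 0.0
    for i in range(n):
        for k in range(n):
            for j in range(n):
                if len({i, k, j}) == 3:
                    indirect = A[i, k] * A[k, j]
                    worst = max(worst, min(abs(1 - A[i, j] / indirect),
                                           abs(1 - indirect / A[i, j])))
    return worst

def weights_from_comparisons(A):
    """Geometric-mean method: one common way to turn a (near-)consistent
    comparison matrix into a normalized weight vector."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[1])
    return g / g.sum()

def overlap_weight(class_a, class_b):
    """Simplified two-class sketch of a feature-domain-overlap weight:
    the smaller the overlap of the two class value ranges on a feature,
    the closer its weight is to 1 (more discrimination power)."""
    lo = max(min(class_a), min(class_b))
    hi = min(max(class_a), max(class_b))
    overlap = max(0.0, hi - lo)
    span = max(max(class_a), max(class_b)) - min(min(class_a), min(class_b))
    return 1.0 - overlap / span

w = weights_from_comparisons(A)
# A weighted SVM can then use w directly, or equivalently each feature
# column can be scaled by sqrt(w[i]) before training a standard SVM.
```

Note that feature weights of this kind act on the feature space itself; they are distinct from the per-sample or per-class weighting options found in off-the-shelf SVM libraries.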
Appears in Collections:Open Access Dissertations and Theses

Files in This Item:
File          Size     Format     Access
fulltext.pdf  5.33 MB  Adobe PDF  Open Access


Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.
