Please use this identifier to cite or link to this item: http://hdl.handle.net/11375/9368
Full metadata record

DC Field | Value | Language
dc.contributor.advisor | M., Peter D. | en_US
dc.contributor.author | Liu, Zhihui | en_US
dc.date.accessioned | 2014-06-18T16:46:50Z | -
dc.date.available | 2014-06-18T16:46:50Z | -
dc.date.created | 2011-06-03 | en_US
dc.date.issued | 2010-08 | en_US
dc.identifier.other | opendissertations/4499 | en_US
dc.identifier.other | 5518 | en_US
dc.identifier.other | 2045371 | en_US
dc.identifier.uri | http://hdl.handle.net/11375/9368 | -
dc.description.abstract | <p>Mixture distributions are typically used to model data in which each observation belongs to one of some number of different groups. They also provide a convenient and flexible class of models for density estimation. When the number of components <em>k</em> is assumed known, the Gibbs sampler can be used for Bayesian estimation of the component parameters. We present the implementation of the Gibbs sampler for mixtures of Normal distributions and show that spurious modes can be avoided by introducing a Gamma prior in the Kiefer-Wolfowitz example.</p> <p>While adopting a Bayesian approach for mixture models has certain advantages, it is not without its problems. One typical problem associated with mixtures is nonidentifiability of the component parameters. This causes label switching in the Gibbs sampler output and makes inference for the individual components meaningless. We show that the usual approach to this problem, imposing simple identifiability constraints on the mixture parameters, is sometimes inadequate, and present an alternative approach that arranges the mixture components in order of non-decreasing means whilst choosing priors that are slightly more informative. We illustrate the success of our approach on the fishery example.</p> <p>When the number of components <em>k</em> is considered unknown, more sophisticated methods are required to perform the Bayesian analysis. One method is the Reversible Jump MCMC algorithm described by Richardson and Green (1997), which they applied to univariate Normal mixtures. Alternatively, selection of <em>k</em> can be based on a comparison of models fitted with different numbers of components by some joint measures of model fit and model complexity. We review these methods and illustrate how to use them to compare competing mixture models using the acidity data.</p> <p>We conclude with some suggestions for further research.</p> | en_US
dc.subject | Statistics and Probability | en_US
dc.title | Bayesian Mixture Models | en_US
dc.type | thesis | en_US
dc.contributor.department | Statistics | en_US
dc.description.degree | Master of Science (MS) | en_US
Appears in Collections:Open Access Dissertations and Theses
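The abstract describes a Gibbs sampler for Normal mixtures with known <em>k</em>, relabelling the components in order of non-decreasing means to mitigate label switching. As a rough illustrative sketch only (the priors, simulated data, and function below are invented for this example and are not taken from the thesis), such a sampler might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_normal_mixture(x, k=2, n_iter=500):
    """Gibbs sampler for a k-component Normal mixture (known k),
    with components ordered by non-decreasing means each sweep."""
    n = len(x)
    # Assumed conjugate priors (illustrative values, not from the thesis):
    mu0, tau2 = np.mean(x), np.var(x)   # Normal prior on component means
    a0, b0 = 2.0, np.var(x)             # Inverse-Gamma prior on variances
    alpha = np.ones(k)                  # Dirichlet prior on weights
    # Initial values
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))
    sigma2 = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    mus = np.empty((n_iter, k))
    for t in range(n_iter):
        # 1. Sample allocations z_i given current parameters
        logp = (np.log(w) - 0.5 * np.log(2 * np.pi * sigma2)
                - (x[:, None] - mu) ** 2 / (2 * sigma2))
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(k, p=pi) for pi in p])
        # 2. Sample component means and variances given allocations
        for j in range(k):
            xj = x[z == j]
            nj = len(xj)
            prec = nj / sigma2[j] + 1 / tau2
            mean = (xj.sum() / sigma2[j] + mu0 / tau2) / prec
            mu[j] = rng.normal(mean, np.sqrt(1 / prec))
            ssq = ((xj - mu[j]) ** 2).sum()
            sigma2[j] = 1 / rng.gamma(a0 + nj / 2, 1 / (b0 + ssq / 2))
        # 3. Sample weights from the Dirichlet posterior
        counts = np.bincount(z, minlength=k)
        w = rng.dirichlet(alpha + counts)
        # 4. Identifiability: relabel so means are non-decreasing
        order = np.argsort(mu)
        mu, sigma2, w = mu[order], sigma2[order], w[order]
        mus[t] = mu
    return mus

# Usage on simulated two-component data
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 150)])
draws = gibbs_normal_mixture(x, k=2)
post_means = draws[250:].mean(axis=0)   # discard burn-in
```

Because of the relabelling step, the two columns of `draws` stay attached to the lower- and higher-mean components rather than swapping labels between sweeps, which is the identifiability issue the abstract discusses.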

Files in This Item:

File | Size | Format | Access
fulltext.pdf | 2.69 MB | Adobe PDF | Open Access


Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.
