I am implementing a Markov chain Monte Carlo method for Gibbs sampling from a simple mixture-of-normals model. I am using a decision-theoretic approach per Stephens (2000), but am also looking at simpler methods. Specifically, I have used:
i) mean ordering
and also tried
ii) ordering by the mixture component weights
I have gotten much better results with the former approach than the latter. I was wondering if (ii) was, in fact, an accepted approach? Per Stephens (2000), it seems ordering is only done by means or variances.

PS: by ordering by the cluster weights $w$ (taken from a Dirichlet distribution, as I believe is standard) I mean:
i) discard burn-in
ii) for each iteration, check the order of the $w$'s: if the order satisfies our constraint (e.g., $w_1 < w_2 < \cdots < w_K$), keep the iteration; otherwise, discard it. Misha xxxx
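The two-step procedure described above can be sketched as follows (a minimal illustration, not the writer's actual code; the Dirichlet draws here stand in for the stored post-burn-in weight samples, one iteration per row):

```python
import numpy as np

# Hypothetical posterior draws of the K = 3 mixture weights, one row per
# post-burn-in Gibbs iteration (simulated Dirichlet draws for illustration).
rng = np.random.default_rng(0)
W = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=1000)

# Step (ii): keep only the iterations whose weights satisfy the ordering
# constraint w_1 < w_2 < ... < w_K; discard all the others.
ordered = np.all(np.diff(W, axis=1) > 0, axis=1)
W_kept = W[ordered]

print(W_kept.shape[0], "of", W.shape[0], "iterations kept")
```

By the exchangeability of the components, only about one iteration in $K!$ survives the constraint, which is why this implementation is so wasteful compared with relabelling.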
Obviously, any ordering creates an identifiability constraint and is equally acceptable. Or not. Indeed, my opinion on ordering is now the same as it was at the time our 2000 JASA paper got published: the cut (or, more exactly, quotienting) of the parameter space created by the ordering is not tuned to the topology of the likelihood/posterior surface. Therefore, the resulting subset may well contain incomplete parts of several modes, instead of concentrating on one of them.
Note that the above email implements the ordering by discarding wrongly ordered iterations.
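For contrast, the more common implementation of an ordering constraint relabels each iteration instead of discarding it; a hypothetical sketch (component means and weights stored row-wise, permuted jointly so that no simulation is lost):

```python
import numpy as np

# Hypothetical Gibbs output for a K = 3 normal mixture: component means
# and weights, one row per iteration (simulated here purely for illustration).
rng = np.random.default_rng(1)
mu = rng.normal(loc=[0.0, 2.0, 5.0], scale=1.0, size=(1000, 3))
w = rng.dirichlet([2.0, 2.0, 2.0], size=1000)

# Relabel each iteration by sorting on the means (approach (i)):
# every simulation is kept; only the component labels are permuted,
# and the weights are carried along under the same permutation.
perm = np.argsort(mu, axis=1)
mu_relab = np.take_along_axis(mu, perm, axis=1)
w_relab = np.take_along_axis(w, perm, axis=1)
```

This keeps the full Monte Carlo sample, but it is still subject to the objection above: sorting imposes a cut of the parameter space that need not match the geometry of the posterior modes.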
Filed under: pictures, R, Statistics