Since Bayes factor approximation is one of my areas of interest, I was intrigued by Xiao-Li Meng’s comments during my poster in Benidorm that I was using the “wrong” bridge sampling estimator when trying to bridge two models of different dimensions, based on the completion

$$\tilde\pi_1(\theta,\psi|x)=\pi_1(\theta|x)\,\omega(\psi|\theta,x)$$

(for $\omega(\psi|\theta,x)$ arbitrary and $\psi$ missing from the first model).
Therefore, starting from the bridge sampling identity

$$B_{12}(x)=\frac{\mathbb{E}_{\pi_2}\left[\tilde\pi_1(\theta,\psi|x)\,\alpha(\theta,\psi)\right]}{\mathbb{E}_{\tilde\pi_1}\left[\pi_2(\theta,\psi|x)\,\alpha(\theta,\psi)\right]}\,,$$

the optimal choice of $\alpha$ leads to the approximation

$$\widehat{B}_{12}=\frac{\frac{1}{N_2}\sum_{i=1}^{N_2}\tilde\pi_1(\theta_{2,i},\psi_{2,i}|x)\Big/\left\{N_1\,\tilde\pi_1(\theta_{2,i},\psi_{2,i}|x)+N_2\,\pi_2(\theta_{2,i},\psi_{2,i}|x)\right\}}{\frac{1}{N_1}\sum_{i=1}^{N_1}\pi_2(\theta_{1,i},\psi_{1,i}|x)\Big/\left\{N_1\,\tilde\pi_1(\theta_{1,i},\psi_{1,i}|x)+N_2\,\pi_2(\theta_{1,i},\psi_{1,i}|x)\right\}}$$
when $(\theta_{1,i},\psi_{1,i})_{1\le i\le N_1}\sim\tilde\pi_1(\theta,\psi|x)$ and $(\theta_{2,i},\psi_{2,i})_{1\le i\le N_2}\sim\pi_2(\theta,\psi|x)$. More exactly, this approximation is replaced with an iterative version, since the optimal $\alpha$ depends on the unknown $B_{12}(x)$. The choice of the density $\omega(\psi|\theta,x)$ is obviously fundamental and it should be close to the true conditional posterior $\pi_2(\psi|\theta,x)$ to guarantee good convergence of the approximation. Using a normal approximation to the posterior distribution of $\psi$, or a non-parametric approximation based on a sample from $\pi_2(\theta,\psi|x)$, or yet again an average of MCMC proposals, are all reasonable choices.
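For concreteness, here is a minimal R sketch of this iterative estimator, under the assumption that `q1` and `q2` return the *unnormalised* completed posteriors (prior times likelihood, with the pseudo-posterior $\omega$ folded into `q1`); the function and argument names are illustrative, not taken from any existing implementation.

```r
## iterative bridge sampling estimator of B12 (a sketch):
## sim1, sim2 are matrices of posterior draws from (the completed)
## model 1 and model 2, one draw per row; q1, q2 evaluate the
## unnormalised completed posteriors at one such row
bridge <- function(sim1, sim2, q1, q2, niter = 50) {
  N1 <- nrow(sim1); N2 <- nrow(sim2)
  # density evaluations are fixed across iterations
  q1.1 <- apply(sim1, 1, q1); q2.1 <- apply(sim1, 1, q2)
  q1.2 <- apply(sim2, 1, q1); q2.2 <- apply(sim2, 1, q2)
  B <- 1 # starting value: the optimal alpha involves the unknown B12,
         # hence the fixed-point iterations
  for (t in 1:niter) {
    num <- mean(q1.2 / (N1 * q1.2 + N2 * B * q2.2))
    den <- mean(q2.1 / (N1 * q1.1 + N2 * B * q2.1))
    B <- num / den
  }
  B
}
```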
The boxplot above compares this solution of Meng and Schilling (2002, JASA), called double because two pseudo-posteriors have to be introduced, with the solution of Chen, Shao and Ibrahim (2001), based on a single completion (using a normal density centred at the estimate of the missing parameter, with variance estimated from the simulation output), when testing whether or not the mean of a normal model with unknown variance is zero. The variabilities are quite comparable in this admittedly overly simple case. Overall, the performances of both extensions are obviously highly dependent on the choice of the completion factors for the two models. The performances of the first solution, which bridges both models at once, are bound to deteriorate as the dimension gap between those models increases. The impact of the dimension of the models is less keenly felt for the other solution, as the approximation remains local.
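To make the single-completion solution concrete on this very testing problem, here is a toy R sketch relying on the `bridge()` function above. The conjugate prior $\mu|\sigma^2\sim\mathcal{N}(0,\sigma^2)$ with $\pi(\sigma^2)\propto 1/\sigma^2$, and every tuning choice below, are my own illustrative assumptions, not the settings behind the boxplot.

```r
set.seed(1)
n <- 30; x <- rnorm(n)                    # data simulated under the null
xbar <- mean(x)

## posterior draws under model 2 (unknown mean and variance)
N <- 1e4
S2 <- sum(x^2) - n^2 * xbar^2 / (n + 1)
sig2.2 <- 1 / rgamma(N, n / 2, S2 / 2)    # sigma2 | x ~ InvGamma(n/2, S2/2)
mu.2 <- rnorm(N, n * xbar / (n + 1), sqrt(sig2.2 / (n + 1)))
sim2 <- cbind(mu.2, sig2.2)

## completion omega(mu|x): normal centred at the simulation estimate of
## the missing parameter, with variance estimated from the simulation
mu.hat <- mean(mu.2); v.hat <- var(mu.2)

## unnormalised completed posteriors, w = (mu, sigma2)
q1 <- function(w) prod(dnorm(x, 0, sqrt(w[2]))) / w[2] *
        dnorm(w[1], mu.hat, sqrt(v.hat))
q2 <- function(w) prod(dnorm(x, w[1], sqrt(w[2]))) / w[2] *
        dnorm(w[1], 0, sqrt(w[2]))

## posterior draws under model 1 (mu = 0), completed by simulating mu
## from omega to match the dimension of model 2
sig2.1 <- 1 / rgamma(N, n / 2, sum(x^2) / 2)
sim1 <- cbind(rnorm(N, mu.hat, sqrt(v.hat)), sig2.1)

bridge(sim1, sim2, q1, q2)                # approximation of B12
```

Repeating the whole simulation, say, 100 times and boxplotting the resulting estimates would mimic the comparison shown above, with the double-bridge alternative substituted for `q1` and `sim1`.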