An interesting note was arXived a few days ago by Madeleine Thompson and Radford Neal. Besides the nice touch of mixing crumbs and slices, the neat idea is to use multiple-try proposals for simulating within a slice and to decrease the dimension of the simulation space at each try. This dimension reduction is achieved by building an orthogonal basis from the gradients of the log density at previously-rejected proposals, until all dimensions are exhausted, in which case the scale of the Gaussian proposal is reduced. (The paper comes with R and C code.) Provided the gradient can be computed (or at least approximated), this is a fairly general method, even though I have not tested it and so cannot say how much calibration it requires. An interesting point is that, contrary to the delayed-rejection method of Antonietta Mira and co-authors, the repeated proposals do not complicate the slice acceptance probability. I am less convinced by the authors’ conclusion that the method compares with adaptive Metropolis techniques, in the sense that the “shrinking rank” method forgets about past experience, starting from scratch at each iteration: it is thus not really learning… (In terms of performance, though, this may well be the case!)
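To make the geometry concrete, here is a minimal R sketch of one within-slice update in the spirit of the shrinking-rank idea: isotropic Gaussian proposals are confined to a subspace that loses one dimension per rejection, using the gradient of the log density at the rejected point, and the proposal scale is halved once all dimensions are exhausted. The names (shrink.rank.update, log.f, grad.log.f, sigma0, max.tries) and the halving rule are my own assumptions, and the sketch omits the crumb bookkeeping that makes the authors' method exact, so it illustrates the mechanism rather than reproducing their algorithm or their code.

## Minimal sketch, not the paper's implementation: one within-slice update with
## Gaussian proposals restricted to a shrinking subspace.
## log.f and grad.log.f are hypothetical user-supplied functions returning the
## log target density and its gradient.
shrink.rank.update <- function(x0, log.f, grad.log.f, sigma0 = 1, max.tries = 100) {
  p <- length(x0)
  log.y <- log.f(x0) - rexp(1)            # slice level, drawn on the log scale
  J <- NULL                                # orthonormal directions already projected out
  sigma <- sigma0
  for (i in seq_len(max.tries)) {
    z <- rnorm(p, sd = sigma)              # isotropic Gaussian step
    if (!is.null(J)) z <- drop(z - J %*% crossprod(J, z))  # restrict to remaining subspace
    x1 <- x0 + z
    if (log.f(x1) > log.y) return(x1)      # proposal falls inside the slice: accept it
    g <- grad.log.f(x1)                    # gradient at the rejected proposal
    if (!is.null(J)) g <- drop(g - J %*% crossprod(J, g))  # part not yet projected out
    k <- if (is.null(J)) 0 else ncol(J)
    if (k < p - 1 && sum(g^2) > 0) {
      J <- cbind(J, g / sqrt(sum(g^2)))    # drop this direction from future proposals
    } else {
      sigma <- sigma / 2                   # dimensions exhausted: reduce the proposal scale
      J <- NULL
    }
  }
  x0                                       # give up and stay at the current point
}

## e.g., for a standard bivariate normal target:
## log.f <- function(x) -sum(x^2) / 2;  grad.log.f <- function(x) -x
## x.new <- shrink.rank.update(c(1, 1), log.f, grad.log.f)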
Filed under: Books, R, Statistics Tagged: adaptive MCMC methods, crumbs, Gaussian random walk, MCMC, Monte Carlo Statistical Methods, R, slice sampling, simulation