The second day of the ABC wwworkshop got off to a better start than yesterday [for me], as I managed to bike to Dauphine early enough to catch the end of Gael's talk and Matias Quiroz's in full on the Australian side (of Zoom), with an interesting take on using frequency-domain (pseudo-)likelihoods in complex models. It was followed by two talks on BSL (Bayesian synthetic likelihood), by David Frazier from Monash and Chris Drovandi from Brisbane: the first on misspecification, with a finer analysis of why synthetic likelihood may prove worse in that case, since the Mahalanobis distance behind it may get very small and the predictive distribution of the distance may become multimodal, also pointing out the poor coverage of both ABC and BSL credible intervals. Chris then gave a wide-ranging coverage of summary-free likelihood-free approaches, with examples where they fared well against some summary-based solutions. Olivier from Grenoble [with a co-author from Monash, keeping up the Australian theme] discussed dimension reductions that could possibly lead to better summary statistics, albeit not directly related to ABC!
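For readers less familiar with BSL, here is a minimal and purely illustrative R sketch of a single synthetic-likelihood evaluation on a toy Poisson model, not the setting of either talk: simulated summaries give a Monte Carlo estimate of a Gaussian mean and covariance, and the observed summary is plugged into that Gaussian, which is where the Mahalanobis term mentioned above enters. The model, summaries, and sample sizes are made-up choices of mine.

synthetic_loglik <- function(theta, s_obs, n_data = 100, n_sim = 200) {
  # simulate n_sim datasets at theta and compute their summaries (toy Poisson model)
  S <- t(replicate(n_sim, {
    y <- rpois(n_data, lambda = theta)
    c(mean(y), log(var(y)))
  }))
  mu  <- colMeans(S)   # Monte Carlo estimate of the summary mean
  Sig <- cov(S)        # Monte Carlo estimate of the summary covariance
  # Gaussian log-density of the observed summary; the quadratic form is the
  # Mahalanobis distance whose behaviour under misspecification was discussed above
  -0.5 * c(t(s_obs - mu) %*% solve(Sig) %*% (s_obs - mu)) -
    0.5 * log(det(Sig)) - length(s_obs) / 2 * log(2 * pi)
}

# usage: summarise the observed data, then plug the function into an MCMC over theta
y_obs <- rpois(100, lambda = 3)
s_obs <- c(mean(y_obs), log(var(y_obs)))
synthetic_loglik(3, s_obs)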
Riccardo Corradin considered the most Milanese of all problems (!), namely how to draw inference on completely random distributions. As the clustering involved in this inference is costly, the authors use our Wasserstein ABC approach on the partitions, with a further link to our ABC-Gibbs algorithm (which Grégoire had just presented) for the tolerance selection. Marko Järvenpää presented an approach related to a just-published paper in Bayesian Analysis, with a notion of noisy likelihood modelled as a Gaussian process, towards avoiding evaluating the (greedy) likelihood too often, as in the earlier Korattikara et al. (2014). And coining the term Bayesian Metropolis-Hastings sampler (as the regular Metropolis (Rosenbluth) is frequentist)! Pedro Rodrigues discussed using normalising flows in poorly identified (or inverse) models, raising the issue of validating this approximation to the posterior and connecting with earlier talks.
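As a rough idea of what modelling a noisy log-likelihood with a Gaussian process looks like, here is a toy R sketch, certainly not Marko's actual method: a handful of noisy log-likelihood evaluations are interpolated by a GP with a squared-exponential kernel, so that the expensive likelihood need not be called at every new theta. The kernel, scales, and the stand-in log-likelihood are all assumptions for illustration.

sq_exp <- function(a, b, ell = 0.5, tau2 = 1)       # squared-exponential kernel
  tau2 * exp(-outer(a, b, "-")^2 / (2 * ell^2))

noisy_loglik <- function(theta)                      # stand-in for an expensive, noisy log-likelihood
  dnorm(theta, mean = 1, sd = 0.7, log = TRUE) + rnorm(1, 0, 0.05)

theta_design <- seq(-2, 4, length.out = 15)          # the few expensive evaluations
ll_design    <- sapply(theta_design, noisy_loglik)

K    <- sq_exp(theta_design, theta_design) + diag(0.05^2, length(theta_design))
Kinv <- solve(K)

theta_new <- seq(-2, 4, length.out = 200)            # where the surrogate is queried for free
Kstar     <- sq_exp(theta_new, theta_design)
gp_mean   <- Kstar %*% Kinv %*% ll_design            # GP predictive mean of the log-likelihood
gp_var    <- diag(sq_exp(theta_new, theta_new) - Kstar %*% Kinv %*% t(Kstar))

plot(theta_new, gp_mean, type = "l", xlab = expression(theta),
     ylab = "surrogate log-likelihood")
points(theta_design, ll_design)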
The afternoon session was a replay of the earlier talks from the Australian mirror. Clara Grazian had given the first talk yesterday, on using and improving a copula-based ABC, introducing empirical likelihood, Gaussian processes and splines, which led to a question as to whether the copula family could itself be chosen by ABC tools. David Nott raised the issue of conflicting summary statistics, illustrated by a Poisson example using the pair made of the empirical mean and the empirical variance as summary: while the empirical mean is sufficient, conditioning on both leads to a different ABC outcome (see the toy sketch below), which indirectly relates to work in progress in our Dauphine group. Anthony Ebert discussed the difficulty of handling state space model parameters with ABC: in an ABCSMC² version, the likelihood is integrated out by a particle filter approximation, but this leads to difficulties with the associated algorithm, which I somewhat associate with the discrete nature of the approximation, possibly incorrectly. Jacob Priddle talked about a whitening version of Bayesian synthetic likelihood, arguing that the variance of the Monte Carlo approximation to the moments of the Normal synthetic likelihood is much reduced when the components of the summary statistic are assumed independent. I am somewhat puzzled by the proposal, though, in that the whitening matrix needs to be estimated as well.
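Here is the toy R sketch of David Nott's Poisson point alluded to above, as I understand it: rejection ABC keeping the same number of draws, once with the empirical mean alone and once with the (mean, variance) pair, yields different approximate posteriors even though the mean is sufficient. The prior, sample sizes, distances, and acceptance rate are illustrative choices of mine, not those of the talk.

set.seed(1)
n     <- 50
y_obs <- rpois(n, lambda = 4)
s_obs <- c(mean(y_obs), var(y_obs))

N      <- 5e4
lambda <- rexp(N, rate = 0.2)                        # draws from an (illustrative) exponential prior
sims   <- matrix(rpois(N * n, rep(lambda, each = n)), nrow = N, byrow = TRUE)
s_mean <- rowMeans(sims)
s_var  <- apply(sims, 1, var)

d_mean <- abs(s_mean - s_obs[1])                                # distance on the mean alone
d_both <- sqrt((s_mean - s_obs[1])^2 + (s_var - s_obs[2])^2)    # distance on (mean, variance)

keep <- 500                                          # identical acceptance rate in both runs
post_mean_only <- lambda[order(d_mean)[1:keep]]
post_both      <- lambda[order(d_both)[1:keep]]

# the two ABC posteriors differ, even though the sample mean is sufficient
c(mean(post_mean_only), mean(post_both))
c(sd(post_mean_only),   sd(post_both))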