After a terrific run this morning to the top of Arthur’s Seat, and then around it (the ribs are feeling fine now!), the Bayes-250 talks were exhilarating and challenging. Jim Smith gave an introduction to the challenges of getting different experts to collaborate on a complex risk assessment, much in the spirit of his book, which got me wondering about experts with their own agendas/utility functions. For instance, in the case of the recent Fukushima disaster, experts from the electricity company could not be expected to provide trustworthy answers… This meant the assessor’s loss function had to account for this bias in the experts’ opinions. John Winn (from Microsoft Cambridge) made the case for probabilistic programming, which means adding functions and types like
random (simulating from a specified probability distribution);
constrain (imposing data constraints);
infer (deriving the posterior distribution);
into standard programming languages (a toy sketch of these primitives closes this post). The idea is conceptually interesting, and the notion of linking Thomas Bayes with Ada Byron Lovelace is promising, in a steampunk universe! However, I remain unconvinced by the universality of the target, as approximations such as EP (expectation propagation) and variational Bayes need to be introduced for fast computation of the posterior distribution. Peter Green presented an acceleration device for simulating over decomposable graphs, thanks to the use of the junction tree. Zoubin Ghahramani recalled the origins of the Indian buffet process and provided along the way a list of anti-Bayesian myths that is quite worth posting. Neil Lawrence showed us an interesting work on latent forces, in the mechanical sense, even though I could not keep up with the mechanics behind it!

In the afternoon, Michael Goldstein told us why Bayes’ theorem does not work (not that I agree with him on that point!), Peggy Seriès explained how, fascinatingly, the brain processes information in a Bayesian manner with an “optimal” prior, and Nicolas Chopin talked about his expectation-propagation, summary-less, likelihood-free algorithm, which I discussed a few days ago. We also had a lively discussion about ABC model choice, from the choice of the distance metric to the impact of the summary statistics. The meeting ended with Andrew Fraser telling us about Thomas Bayes’ studies at Edinburgh in 1719-1721 and a quick stroll through the Old College. The day closed in a surprising pub, The Blind Poet, and then a so-so south Indian restaurant, where we most surprisingly bumped into Marc Suchard, who was visiting collaborators in Edinburgh!
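To make the above more concrete, here is a minimal sketch in R of what random, constrain, and infer could look like on a toy Beta-Binomial model. The function names follow the talk, not any actual library, and inference proceeds by brute-force rejection sampling rather than the EP or variational approximations mentioned above:

```r
## Hypothetical sketch of the three probabilistic-programming primitives,
## illustrated on a Beta-Binomial toy model.

# random(): simulate from the prior, here theta ~ Beta(1,1), i.e. uniform
random <- function(n) rbeta(n, 1, 1)

# constrain(): impose the data constraint by keeping only the prior draws
# whose simulated data exactly match the observations
constrain <- function(theta, data) {
  sims <- rbinom(length(theta), size = data$trials, prob = theta)
  theta[sims == data$successes]
}

# infer(): return a sample from the resulting posterior distribution
infer <- function(n, data) constrain(random(n), data)

post <- infer(1e5, data = list(trials = 10, successes = 7))
hist(post, freq = FALSE, main = "posterior of theta")
curve(dbeta(x, 1 + 7, 1 + 3), add = TRUE)  # exact Beta(8,4) posterior, for comparison
```

Exact matching of simulated and observed data only works here because the data are discrete; with continuous observations, approximations of the ABC or EP/variational kind become unavoidable, which is precisely my reservation above.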