A surprising paper came out in the last issue of Statistical Science, linking martingales and Bayes factors. In the historical part, the authors (Shafer, Shen, Vereshchagin and Vovk) recall that martingales were popularised by Martin-Löf, who is also influential in the theory of algorithmic randomness. A property of test martingales (i.e., martingales that are nonnegative with expectation one) is that

$$\mathbb{P}\Big(\sup_{s\le t} X_s \ge c\Big) \le 1/c \qquad \text{for all } c\ge 1,$$
which makes their sequential maxima p-values of sorts. I had never thought about likelihood ratios this way, but it is true that a (reciprocal) likelihood ratio

$$X_t = \prod_{i=1}^{t} \frac{q(x_i)}{p(x_i)}$$
is a martingale when the observations are i.i.d. from p (see the quick simulation check below). The authors define a Bayes factor (for P) as satisfying (Section 3.2)

$$\int \frac{1}{B}\,\mathrm{d}P \le 1,$$
which I find hard to relate to my understanding of Bayes factors because there is no prior nor parameter involved. I first thought there was a restriction to simple null hypotheses. However, there is a composite versus composite example (Section 8.5, a Binomial probability being less than or larger than 1/2). So P would then be the marginal likelihood. In this case the test martingale is

$$X_t = \frac{\mathbb{P}(S_t \le s_t)}{\mathbb{P}(S_t \ge s_t)}, \qquad S_t \sim \mathcal{B}(t,1/2),$$

where $s_t$ denotes the observed number of successes among the first $t$ observations.
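As a quick sanity check of the martingale claim above (my own toy example, not from the paper: p and q are taken to be Bernoulli(.5) and Bernoulli(.7) densities), the product of the ratios q(x_i)/p(x_i) should average to one when the x_i are simulated from p:

#toy check: E_p[ prod_i q(x_i)/p(x_i) ] = 1 for x_i's i.i.d. from p
p=.5; q=.7; t=20; M=10^5
x=matrix(rbinom(t*M,1,p),M)                    #M sequences of t draws from p
lr=apply(ifelse(x==1,q/p,(1-q)/(1-p)),1,prod)  #X_t for each sequence
mean(lr)                                       #one, up to Monte Carlo error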
Simulating this binomial test martingale is straightforward; however, I do not recover the picture they obtain (Fig. 6):
theta=.5  #success probability; not set in the original code (theta=1/2 is the null value)
x=sample(0:1,10^4,rep=TRUE,prob=c(1-theta,theta))
s=cumsum(x)  #running number of successes
#log test martingale: log P(Bin(t,1/2) <= s_t) - log P(Bin(t,1/2) >= s_t)
ma=pbinom(s,1:10^4,.5,log.p=TRUE)-
  pbinom(s-1,1:10^4,.5,log.p=TRUE,lower.tail=FALSE)
plot(ma,type="l")
lines(cummin(ma),lty=2)  #or cummax(ma) for the running maximum
lines(log(0.1)+0.9*cummin(ma),lty=2,col="steelblue")  #or cummax
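One way to probe the discrepancy (my own quick diagnostic, not taken from the paper; t and M are arbitrary choices) is to check whether the simulated ratio has expectation one under theta=1/2 at a fixed time t, as any test martingale must:

#Monte Carlo check of E[X_t]=1 under theta=1/2
t=100; M=10^5
s=rbinom(M,t,.5)  #number of successes at time t under the null
X=pbinom(s,t,.5)/pbinom(s-1,t,.5,lower.tail=FALSE)
mean(X)  #equals one, up to Monte Carlo error, if X_t is a test martingale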
When theta is below 1/2, the sequence goes down almost linearly to -infinity (and, symmetrically, climbs to +infinity when theta is above 1/2).
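The almost-linear decay is consistent with a large-deviation argument (my sketch, assuming theta below 1/2, with theta=.45 as an assumed example value): since s_t is approximately theta*t, the numerator P(Bin(t,1/2) <= s_t) decays like exp(-t*KL), with KL the Kullback-Leibler divergence between Bernoulli(theta) and Bernoulli(1/2), while the denominator goes to one, so ma should have slope close to -KL:

#compare the empirical slope of ma with the large-deviation rate -KL
theta=.45  #assumed value below 1/2
x=sample(0:1,10^4,rep=TRUE,prob=c(1-theta,theta))
s=cumsum(x)
ma=pbinom(s,1:10^4,.5,log.p=TRUE)-
  pbinom(s-1,1:10^4,.5,log.p=TRUE,lower.tail=FALSE)
coef(lm(ma~seq_along(ma)))[2]  #empirical slope of the log-martingale
-(theta*log(2*theta)+(1-theta)*log(2*(1-theta)))  #-KL(Bern(theta),Bern(1/2))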