Trials until first success
While studying Bayesian Approaches to Clinical Trials and Health-Care Evaluation (David J. Spiegelhalter, Keith R. Abrams, Jonathan P. Myles), I noticed they state that a single success in a trial only has meaning once the design is known. For example, with six trials and one success: was the design a fixed six trials that happened to yield one success, or were trials run until the first success? Just for fun, I am exploring what can be learned with the latter setup.
One sided hypothesis
For a null hypothesis of a 50% chance of success, it is fairly obvious that only the one-sided alternative of less than 50% success makes sense. The null would be rejected if it takes enough trials to obtain the first success. This can be simulated quite easily. It can also be calculated, but for that it seems better to compute the probabilities of the first success occurring in trials 1 to 5 and take the remainder, rather than summing a potentially infinite number of failures before the first success. For comparison the binomial test is added, again one-sided, as the probability of at most one success.

empprob <- function(p, n=1000) {
  # simulate n Bernoulli(p) trials, locate the successes, and take the
  # gaps between successive successes as waiting times until a success;
  # the fraction of waiting times of at least 6 estimates the chance
  # that the first success needs six or more trials
  samps <- sample(c(FALSE, TRUE), n, replace=TRUE, prob=c(1-p, p))
  ww <- which(samps)
  dd <- diff(c(1, ww))
  sum(dd >= 6)/length(dd)
}
pr <- seq(0,1,.005)
sa <- sapply(pr,empprob,n=1e4)
# density of the number of trials needed for the first success
# (geometric distribution counted in trials rather than failures)
dbin <- function(size, prob, log=FALSE) {
  if (log)
    log(prob) + (size-1)*log(1-prob) else
    prob*(1-prob)^(size-1)
}
# P(the first success needs at least 'size' trials)
pbin <- function(size, prob) {
  1 - sum(dbin(1:(size-1), prob))
}
sa2 <- sapply(pr,function(x) pbin(6,x))
sa3 <- pbinom(1,6,pr)
plot(x=pr,
     y=sa,
     type='l',
     ylab='Alpha',
     xlab='probability under H0: pr>p',
     main='Five failures, one success')
lines(x=pr,
      y=sa2,
      type='l',
      col='blue')
lines(x=pr,
      y=sa3,
      type='l',
      col='red')
legend(x='topright', col=c('black','blue','red'),
       lty=1, legend=c('simulation, trials until first success',
                       'calculation, trials until first success',
                       'Binomial'))
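As a quick cross-check (my addition, not from the original post): the first success needs at least six trials exactly when the first five trials all fail, so the tail probability reduces to the closed form (1-p)^5. A minimal sketch comparing it with pbin() above:

# closed-form tail probability versus the pbin() calculation above
p <- 0.3
c(closed_form = (1 - p)^5,
  pbin        = pbin(6, p))
# both give 0.16807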
Bayes estimate
Obviously, a Bayes estimate is interesting. Using LaplacesDemon I can reuse the density developed above. Note that, in hindsight, for a single experiment it should be equivalent to a binomial: the binomial density only differs by a factor of six (the six permutations of one success among six trials), and a constant factor cancels out of the posterior.

library('LaplacesDemon')
mon.names <- "LP"
parm.names <- c('x','y')
MyData <- list(mon.names=mon.names, parm.names=parm.names)
N <-1
Model <- function(parm, Data)
{
  # x: success probability under the trials-until-first-success model
  # y: success probability under the binomial model (1 success in 6 trials)
  x <- parm[1]
  y <- parm[2]
  LL <- dbin(6, x, log=TRUE) + dbinom(1, 6, y, log=TRUE)
  # Jeffreys Beta(0.5, 0.5) prior on both probabilities
  LP <- sum(dbeta(parm, .5, .5, log=TRUE)) + LL
  Modelout <- list(LP=LP, Dev=-2*LL, Monitor=LL,
                   yhat=x, parm=parm)
  return(Modelout)
}
Initial.Values <- c(.5,.5)
Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values)
Fit
Summary of Stationary Samples
               Mean        SD        MCSE      ESS          LB     Median         UB
x         0.2184365 0.1409504 0.011367733 277.9335  0.02194684  0.1991762  0.5671549
y         0.2087487 0.1358410 0.009702183 314.1335  0.02145930  0.1855887  0.5254163
Deviance  8.8469564 1.7228087 0.124345091 341.6582  7.30631061  8.1527719 13.6530403
LP       -4.4234782 0.8614044 0.062172546 341.6582 -6.82652015 -4.0763859 -3.6531553
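The posterior summaries for x and y are indeed very similar, as expected. A small illustration of the factor-of-six argument (my addition, not from the original post): the two likelihoods differ only by the constant choose(6,1) = 6, which cancels in the posterior.

# the binomial and trials-until-first-success densities differ by a constant
p <- seq(0.05, 0.95, by=0.15)
dbinom(1, 6, p) / dbin(6, p)
# every ratio equals choose(6, 1) = 6, so the two posteriors agree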
SAS
Proc MCMC has a general distribution into which one can plug one's own log-likelihood, so the same Bayes estimate is easy to do in SAS too. I took this as a toy example to try that feature.

data a;
size=6;
output;
run;
proc mcmc data=a;
parm prob .5;
prior prob ~uniform(0, 1);
/* log-likelihood of the number of trials until the first success */
ll = log(prob) + (size-1)*log(1-prob);
model size ~ dgeneral(ll);
run;
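As a side note (my addition, not part of the original post): R's built-in dgeom() parameterises the geometric distribution by the number of failures before the first success, so it matches the hand-rolled dbin() after an offset of one. A minimal check:

# dgeom() counts failures, dbin() counts trials; both give 0.3 * 0.7^5
p <- 0.3
c(dgeom = dgeom(6 - 1, p),  # five failures before the first success
  dbin  = dbin(6, p))       # first success on the sixth trial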