Optional stopping does not bias parameter estimates (if done correctly)
- “… sequential testing appears to inflate the observed effect size”
- “discussion suggests to me that estimation is not straight forward?”
- “researchers who are interested in estimating population effect sizes should not use […] optional stopping”
- “we found that truncated RCTs provide biased estimates of effects on the outcome that precipitated early stopping” (Bassler et al., 2010)
Given the recent discussion on optional stopping and Bayes, I wanted to solicit opinions on the following thought experiment.

Researcher A collects tap water samples in a city, tests them for lead, and stops collecting data once a t-test comparing the mean lead level to a “safe” level is significant at p < .05. After this optional stopping, researcher A computes a Bayesian posterior (with a weakly informative prior) and reports the median of the posterior as the best estimate of the lead level in the city.

Researcher B collects the same number of water samples, but with a pre-specified N, and then also computes a Bayesian estimate.

Researcher C collects water samples from every single household in the city (effectively collecting the whole population).

Hopefully we can all agree that the best estimate of the mean lead level in the city is obtained by researcher C. But do you think that the estimate of researcher B is closer to the one from researcher C and should be preferred over the estimate of researcher A? What – if anything – does this tell us about optional stopping and its influence on Bayesian estimates?
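To make researcher A's procedure concrete, here is a minimal sketch of a single run, under assumptions of my own choosing (a known SD of 2 and a weakly informative normal prior N(0, 10²) on the mean; the thought experiment does not fix either). With a normal likelihood, known SD, and normal prior, the posterior is again normal, so its median equals its mean.

# Minimal sketch of researcher A's procedure (assumed: known SD, conjugate
# normal prior N(0, 10^2); these choices are illustrative, not prescribed)
set.seed(1)
trueLevel <- 3; trueSD <- 2; safeLevel <- 2.7
prior.mean <- 0; prior.sd <- 10

x <- rnorm(3, trueLevel, trueSD)   # start with a minimal sample
while (t.test(x, mu = safeLevel, alternative = "less")$p.value > .05 &&
       length(x) < 50) {           # n_max, so the procedure always terminates
  x <- c(x, rnorm(1, trueLevel, trueSD))
}

# conjugate normal-normal posterior for the mean (SD treated as known)
post.var    <- 1 / (1/prior.sd^2 + length(x)/trueSD^2)
post.median <- post.var * (prior.mean/prior.sd^2 + sum(x)/trueSD^2)
post.median  # for a normal posterior, median = mean

Note that the simulation below tracks plain sample means rather than posterior medians; with a weakly informative prior the two are nearly identical, and the bias question is the same.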
Let’s simulate the scenario (R code provided below) with the following settings:
- The true lead level in the city has a mean of 3 with an SD of 2
- The “safe” lead level is defined at 2.7 (or below)
A naive analysis
A valid analysis
- If effect sizes from samples with different sample sizes are combined, they must be meta-analytically weighted according to their sample size (or precision). Optional stopping (e.g., based on p-values, but also based on Bayes factors) leads to a conditional bias: if a study stops very early, its effect size must be overestimated (otherwise it would not have stopped with a significant p-value). But early stops have a small sample size, and in a meta-analysis these extreme early stops get a small weight (see the toy example after the list of strategies below).
- The determination of sample size (fixed vs. optional stopping) and the presence of publication bias are separate issues. Comparing strategies A and B conflates the two (at least implicitly): A does optional stopping and has publication bias, as she only reports the result if the study hits the threshold; non-significant results go into the file drawer. B, in contrast, has a fixed sample size and reports all results, without publication bias. You can do optional stopping without publication bias (stop if significant, but also report the result if you didn't hit the threshold before reaching n_max). Likewise, if B collects a fixed sample size but only reports trials whose effect size supports the foregone conclusion, her estimate will be very biased as well.
In the simulation, the four combinations are defined as follows:
- Strategies A and B without publication bias report all outcomes.
- Strategy A with publication bias reports only those studies whose sample mean is significantly lower than the safe lead level.
- Strategy B with publication bias reports only those studies whose sample mean is smaller than the safe lead level (regardless of significance).
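As a toy illustration of the weighting argument from the first bullet under “A valid analysis” above (the numbers here are invented purely for illustration): take an early stop with n = 3 and a strongly biased sample mean, and a late termination with n = 50 that lands close to the true level of 3.

# Toy example: naive vs. sample-size-weighted averaging of two studies
# (the two sample means are invented for illustration)
means <- c(1.2, 3.1)   # early stop (strongly biased) and late termination (close to truth)
ns    <- c(3, 50)      # their sample sizes

mean(means)                   # naive average: 2.15, far from the true value of 3
weighted.mean(means, w = ns)  # n-weighted average: ~2.99, close to 3

The weighted.mean(empMean, w=n) calls in the analysis code below apply the same logic across all 10,000 simulated studies.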
Some descriptive plots to illustrate the behavior of the strategies
The estimated mean levels
Here are the computed mean lead levels in our 8 combinations (true value = 3):
| Sampling plan | PubBias | Naive mean | Weighted mean |
|---------------|---------|------------|---------------|
| sequential    | FALSE   | 2.85       | 3.00          |
| fixed         | FALSE   | 3.00       | 3.00          |
| sequential    | TRUE    | 1.42       | 1.71          |
| fixed         | TRUE    | 2.46       | 2.55          |
Concerning the sequential procedures described here, some authors have raised concerns that these procedures result in biased effect size estimates (e.g., Bassler et al., 2010; Kruschke, 2014). We believe these concerns are overstated, for at least two reasons.

First, it is true that studies that terminate early at the H1 boundary will, on average, overestimate the true effect. This conditional bias, however, is balanced by late terminations, which will, on average, underestimate the true effect. Early terminations have a smaller sample size than late terminations, and consequently receive less weight in a meta-analysis. When all studies (i.e., early and late terminations) are considered together, the bias is negligible (Berry, Bradley, & Connor, 2010; Fan, DeMets, & Lan, 2004; Goodman, 2007; Schönbrodt et al., 2015). Hence, the sequential procedure is approximately unbiased overall.

Second, the conditional bias of early terminations is conceptually equivalent to the bias that results when only significant studies are reported and non-significant studies disappear into the file drawer (Goodman, 2007). In all experimental designs – whether sequential, non-sequential, frequentist, or Bayesian – the average effect size inevitably increases when one selectively averages studies that show a larger-than-average effect size. Selective publishing is a concern across the board, and an unbiased research synthesis requires that one considers significant and non-significant results, as well as early and late terminations.

Although sequential designs have negligible unconditional bias, it may nevertheless be desirable to provide a principled “correction” for the conditional bias at early terminations, in particular when the effect size of a single study is evaluated. For this purpose, Goodman (2007) outlines a Bayesian approach that uses prior expectations about plausible effect sizes. This approach shrinks extreme estimates from early terminations towards more plausible regions. Smaller sample sizes are naturally more sensitive to prior-induced shrinkage, and hence the proposed correction fits the fact that most extreme deviations from the true value are found in very early terminations that have a small sample size (Schönbrodt et al., 2015).
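The following is a minimal sketch of such prior-induced shrinkage, not Goodman's (2007) exact procedure: a conjugate normal model with a known SD and a prior that I picked purely for illustration. The same prior pulls a small, extreme early stop strongly towards plausible values, while a large late termination is barely affected.

# Sketch of prior-induced shrinkage (illustrative prior, not Goodman's exact method)
shrunk.estimate <- function(x, sigma = 2, prior.mean = 2.5, prior.sd = 1) {
  post.var <- 1 / (1/prior.sd^2 + length(x)/sigma^2)
  post.var * (prior.mean/prior.sd^2 + sum(x)/sigma^2)  # posterior mean
}

shrunk.estimate(c(0.8, 1.1, 1.3))   # extreme early stop (n = 3): pulled strongly towards the prior
shrunk.estimate(rnorm(50, 3, 2))    # late termination (n = 50): barely shrunk

The full R code for the simulation and the table above follows.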
library(ggplot2)
library(dplyr)
library(htmlTable)
# set seed for reproducibility
set.seed(0xBEEF)
# simulation settings
trueLevel <- 3
trueSD <- 2
safeLevel <- 2.7
maxN <- 50
minN <- 3
B <- 10000  # number of Monte Carlo simulations

res <- data.frame()
for (i in 1:B) {
  print(paste0(i, "/", B))
  maxSample <- rnorm(maxN, trueLevel, trueSD)

  # optional stopping
  for (n in minN:maxN) {
    t0 <- t.test(maxSample[1:n], mu=safeLevel, alternative="less")
    # print(paste0("n=", n, "; ", t0$estimate, ": ", t0$p.value))
    if (t0$p.value <= .05) break
  }
  finalSample.seq <- maxSample[1:n]

  # now construct a matched fixed-n
  finalSample.fixed <- rnorm(n, trueLevel, trueSD)

  # ---------------------------------------------------------------------
  # save results in long format

  # sequential design
  res <- rbind(res, data.frame(
    id = i,
    type = "sequential",
    n = n,
    p.value = t0$p.value,
    selected = t0$p.value <= .05,
    empMean = mean(finalSample.seq)
  ))

  # fixed design
  res <- rbind(res, data.frame(
    id = i,
    type = "fixed",
    n = n,
    p.value = NA,
    selected = mean(finalSample.fixed) <= safeLevel,  # some arbitrary publication bias selection
    empMean = mean(finalSample.fixed)
  ))
}

save(res, file="res.RData")
# load("res.RData")

# Figure 1: Sampling distribution
ggplot(res, aes(x=n, y=empMean)) +
  geom_jitter(height=0, alpha=0.15) +
  xlab("Sample size") + ylab("Sample mean") +
  geom_hline(yintercept=trueLevel, color="red") +
  facet_wrap(~type) + theme_bw()

# Figure 2: Individual study estimates
ggplot(res, aes(x=empMean)) +
  geom_density() +
  xlab("Sample mean") +
  geom_vline(xintercept=trueLevel, color="red") +
  facet_wrap(~type) + theme_bw()

# the mean estimate of all late terminations
res %>% group_by(type) %>% filter(n==50) %>% summarise(lateEst = mean(empMean))
# how many strategy A studies were significant?
res %>% filter(type=="sequential") %>% .[["selected"]] %>% table()
# Compute estimated lead levels
est.noBias <- res %>% group_by(type) %>% dplyr::summarise(
bias = FALSE,
naive.mean = mean(empMean),
weighted.mean = weighted.mean(empMean, w=n)
)
est.Bias <- res %>% filter(selected==TRUE) %>% group_by(type) %>% dplyr::summarise(
bias = TRUE,
naive.mean = mean(empMean),
weighted.mean = weighted.mean(empMean, w=n)
)
est <- rbind(est.noBias, est.Bias)
est

# output a html table
est.display <- txtRound(data.frame(est), 2, excl.cols=1:2)
t1 <- htmlTable(est.display,
  header = c("Sampling plan", "PubBias", "Naive mean", "Weighted mean"),
  rnames = FALSE)
t1
cat(t1)