Bacteria and Alzheimer’s disease: I just need to know if ten patients are enough
You can guarantee that when scientists publish a study titled:
a newspaper will publish a story titled:
Without access to the paper, it’s difficult to assess the evidence. I suggest you read Jonathan Eisen’s analysis of the abstract. Essentially, it makes two claims:
- that cultured astrocytes (a type of brain cell) can adsorb and internalize lipopolysaccharide (LPS) from Porphyromonas gingivalis, a bacterium found in the mouth
- that LPS was also detected in brain tissue from 4/10 Alzheimer’s disease (AD) cases, but not in tissue from 10 matched normal brains
Regardless of the biochemistry – which does not sound especially convincing to me[1] – how about the statistics?
LPS was detected in 0/10 normal brains, compared with 4/10 AD brains. The “tl;dr” version of this discussion – if you think that those look like rather small numbers, you’re correct.
We can set up a matrix in R to contain those values.
ad <- matrix(c(4, 6, 0, 10), nrow = 2)
colnames(ad) <- c("AD", "Norm")
rownames(ad) <- c("lps+", "lps-")
ad
#      AD Norm
# lps+  4    0
# lps-  6   10
Are those proportions significantly different? Or to put it another way: "I just need to know if ten patients are enough." Let's talk about statistical power.
Without going too deeply into the mathematics, the power of a statistical test is a number between 0 and 1, equal to one minus the probability of a type II (false negative) error. For example, when power = 0.8, the probability of a false negative (concluding that there is no difference between groups when, in fact, there is one) is 0.2.
We’re looking at proportions in two groups (a two-proportion test), where the power depends on several parameters:
- sample size (n, per group)
- the proportions in each group (p1 and p2)
- probability of type I (false positive) error (sig.level)
- whether the test is one- or two-sided (alternative)
R provides us with the function power.prop.test():
Usage:

power.prop.test(n = NULL, p1 = NULL, p2 = NULL, sig.level = 0.05,
                power = NULL, alternative = c("two.sided", "one.sided"),
                strict = FALSE)
How it works: you set one of the parameters n, p1, p2, sig.level or power to NULL and it is calculated from the other parameters.
To get started – what’s the power of the study in the publication, using sig.level = 0.05?
ppt <- power.prop.test(n = 10, p1 = 0, p2 = 0.4)
ppt$power
# [1] 0.6250675
Effectively, what that means is that the probability of a false negative (concluding no difference in LPS detection between normal and AD brains when there is a difference) is about 0.375. That’s rather high.
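As a quick check, that probability is simply one minus the power we just computed:

1 - ppt$power
# [1] 0.3749325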
Most researchers set power = 0.8 as an acceptable threshold. So – how many samples per group do we need to achieve power = 0.8 at sig.level = 0.05?
ppt <- power.prop.test(p1 = 0, p2 = 0.4, power = 0.8)
ppt$n
# [1] 14.45958
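Since you cannot recruit a fraction of a patient, round up:

ceiling(ppt$n)
# [1] 15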
About 15 samples. Not many more than 10 – but more than 10, nevertheless. How about something more stringent: power = 0.9, sig.level = 0.01?
ppt <- power.prop.test(p1 = 0, p2 = 0.4, power = 0.9, sig.level = 0.01)
ppt$n
# [1] 27.16856
Note that with larger sample sizes (e.g. 100 per group), the proportion of normal brains containing LPS can be quite high relative to the AD brains (0.4) and a difference can still be detected with 80% power at sig.level = 0.05:
ppt <- power.prop.test(p2 = 0.4, power = 0.8, n = 100)
ppt$p1
# [1] 0.2180086
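As a rough illustration, using hypothetical counts of 22/100 and 40/100 (close to those proportions), the two-proportion test would comfortably reject at the 0.05 level:

prop.test(c(22, 40), c(100, 100))$p.value
# roughly 0.009 -- well below 0.05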
Finally, a plot showing the increase of power with group sample size at sig.level = 0.05 and sig.level = 0.01:
library(ggplot2)

# quick, dirty, ugly, but it works
df1 <- data.frame(n = 10:30,
                  p = sapply(10:30, function(x) power.prop.test(p1 = 0, p2 = 0.4, n = x, sig.level = 0.05)$power),
                  s = 0.05)
df2 <- data.frame(n = 10:30,
                  p = sapply(10:30, function(x) power.prop.test(p1 = 0, p2 = 0.4, n = x, sig.level = 0.01)$power),
                  s = 0.01)
df3 <- rbind(df1, df2)
ggplot(df3) + geom_point(aes(n, p, color = factor(s))) + theme_bw()
So much for power. How about testing for a difference between the groups?
You might be tempted to reach for the two-proportion test, implemented in R as prop.test(). You should not – but here’s the result anyway:
prop.test(c(0, 4), c(10, 10))

        2-sample test for equality of proportions with continuity correction

data:  c(0, 4) out of c(10, 10)
X-squared = 2.8125, df = 1, p-value = 0.09353
alternative hypothesis: two.sided
95 percent confidence interval:
 -0.803636315  0.003636315
sample estimates:
prop 1 prop 2
   0.0    0.4

Warning message:
In prop.test(c(0, 4), c(10, 10)) : Chi-squared approximation may be incorrect
Note the warning message. That's telling you that the counts are too small for the chi-squared approximation to be reliable. The so-called Cochran conditions stipulate that no cell should have an expected count of zero and that at least 80% of cells should have expected counts of 5 or more. Some online power calculators will warn you when these assumptions have been violated.
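You can see the problem directly by inspecting the expected cell counts for our table (chisq.test() emits the same warning when you run it):

chisq.test(ad)$expected
#      AD Norm
# lps+  2    2
# lps-  8    8

Half of the cells have expected counts below 5, so the chi-squared approximation is unreliable here.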
The alternative is Fisher’s exact test:
fisher.test(ad)

        Fisher's Exact Test for Count Data

data:  ad
p-value = 0.08669
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 0.7703894       Inf
sample estimates:
odds ratio
       Inf
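For the curious, that p-value can be reconstructed by hand. Conditioning on the table margins (4 LPS-positive brains, 16 LPS-negative, 10 brains per group), the number of LPS-positive AD brains follows a hypergeometric distribution, and the two-sided p-value sums the probabilities of all tables at least as extreme as the one observed. This is only a sketch; the small tolerance mirrors the floating-point comparison used internally by fisher.test():

# probabilities of 0, 1, 2, 3 or 4 LPS-positive brains in the AD group
probs <- dhyper(0:4, m = 4, n = 16, k = 10)
sum(probs[probs <= probs[5] * (1 + 1e-7)])
# roughly 0.0867 -- the same two-sided p-value reported by fisher.test()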
In summary then: not only is the conclusion that "LPS from periodontal bacteria can access the AD brain during life" rather premature, but the study is also underpowered. We cannot say whether LPS detection in the AD brains differs significantly from that in normal brains.
[1] and I used to be a biochemist (D. Phil., 1997)