lmer vs INLA for variance components
Just for fun, I decided to compare the estimates from lmer and INLA for the variance components of an LMM (not something you would ordinarily do, since it compares frequentist and Bayesian approaches). The code is below. A couple of plots are drawn showing the posterior distributions of the hyperparameters (in this case the precisions) from INLA, which are difficult to get from the frequentist framework. There is a link to a presentation by Douglas Bates in the code, detailing why you might not want to rely on a simple symmetric interval for a variance component (the distribution is not symmetrical), and how you could get at it by profiling the deviance… but it's a lot of work.
Note that we’re comparing the precision (tau) rather than the variance or SD… SD = 1/sqrt(tau)
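For example, with the values used in the simulation below (just spelling the conversion out):

1/sqrt(3)     # residual SD, about 0.577
1/sqrt(0.25)  # random-intercept SD, 2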
#
# Compare lmer and inla for LMM
# largely taken from Spatial and Spatio-temporal Bayesian Models with R-INLA (Blangiardo & Cameletti, 2015), section 5.4.2
#
m <- 10000 # N obs
set.seed(1234)
x <- rnorm(m)
group <- sample(seq(1, 100), size = m, replace = TRUE)
# simulate random intercept
tau.ri <- .25
1/sqrt(tau.ri) # SD
set.seed(4567)
v <- rnorm(length(unique(group)), 0, sqrt(1/tau.ri))
# assign random intercept to individuals
vj <- v[group]
# simulate y
tau <- 3
1/sqrt(tau) # SD
set.seed(8910)
b0 <- 5
beta1 <- 2
y <- rnorm(m, b0 + beta1*x + vj, 1/sqrt(tau))
library(lme4)
mod <- lmer(y ~ x + (1|group))
summary(mod)
vc <- VarCorr(mod)
library(INLA)
form <- y ~ x + f(group, model = "iid", param = c(1, 5e-5))
imod <- inla(form, family = "gaussian",
             data = data.frame(y = y, x = x, group = group))
summary(imod)
cbind(truth = c(tau, tau.ri), lmer = 1/c(attr(vc, "sc")^2, unlist(vc)), inla = imod$summary.hyperpar$`0.5quant`)
plot(imod,
     plot.fixed.effects = FALSE,
     plot.lincomb = FALSE,
     plot.random.effects = FALSE,
     plot.predictor = FALSE,
     plot.prior = TRUE)
plot(imod$marginals.hyperpar$`Precision for the Gaussian observations`, type = "l")
# the equivalent of this for lmer is not easy to get at all. One would have to profile the deviance function (see http://lme4.r-forge.r-project.org/slides/2009-07-16-Munich/Precision-4.pdf)
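# A sketch (not from the original post) of the lme4 route hinted at above:
# profile() profiles the deviance for the fitted model, and confint() on the
# profile returns asymmetric intervals for .sig01 (the group-intercept SD) and
# .sigma (the residual SD); 1/SD^2 would put these on the precision scale used
# here (with the interval endpoints swapping).
pr <- profile(mod)
confint(pr)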
# shade the central 95% credible region of the posterior for the residual precision
lo <- imod$summary.hyperpar$`0.025quant`[1]
hi <- imod$summary.hyperpar$`0.975quant`[1]
dd <- imod$marginals.hyperpar$`Precision for the Gaussian observations`
dd <- dd[dd[,1] >= lo & dd[,1] <= hi, ]
dd <- rbind(dd, c(hi, 0))
dd <- rbind(dd, c(lo, 0))
polygon(dd, col = "blue")
# same again for the group (random-intercept) precision
plot(imod$marginals.hyperpar$`Precision for group`, type = "l")
lo <- imod$summary.hyperpar$`0.025quant`[2]
hi <- imod$summary.hyperpar$`0.975quant`[2]
dd <- imod$marginals.hyperpar$`Precision for group`
dd <- dd[dd[,1] >= lo & dd[,1] <= hi, ]
dd <- rbind(dd, c(hi, 0))
dd <- rbind(dd, c(lo, 0))
polygon(dd, col = "blue")
As you’d hope, the results come pretty close to each other and the truth:
cbind(truth = c(tau, tau.ri), lmer = 1/c(attr(vc, "sc")^2, unlist(vc)), inla = imod$summary.hyperpar$`0.5quant`)

      truth      lmer      inla
       3.00 2.9552444 2.9556383
group  0.25 0.2883351 0.2919622
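If you'd rather see the INLA results on the SD scale, the precision marginals can be transformed. A minimal sketch, assuming the R-INLA helpers inla.tmarginal (transform a marginal) and inla.zmarginal (summarise a marginal):

sd.marg <- inla.tmarginal(function(tau) 1/sqrt(tau),
                          imod$marginals.hyperpar$`Precision for the Gaussian observations`)
inla.zmarginal(sd.marg)    # posterior summary (mean, sd, quantiles) on the SD scale
plot(sd.marg, type = "l")  # posterior density of the residual SD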
Code on Github…