PK analysis: Theoph again
This week I wanted to repeat the Theoph PK analysis of two weeks ago in Stan. While doing so, something suddenly dawned on me: for a non-linear model in classical statistics we consider it normal to provide good initial values, whereas for MCMC samplers the usual advice I remember reading is to start with highly dispersed initial values. So, for non-linear models in a Bayesian context, there are two contrasting views on initialization. In my experience, the highly dispersed initializations should probably be abandoned, as they do not lead to convergence.
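To make the contrast concrete, here is a small illustration of my own (not part of the analysis below): a highly dispersed initialization for the subject-level absorption rate constants would spread them over the whole range allowed by the prior, whereas the informed initialization used further down centres them near a plausible value.
# hypothetical illustration only
set.seed(1)
dispersed_kas <- exp(runif(12,-3,3)) # anywhere over the prior range for log(ka)
informed_kas <- rnorm(12,2,.2)       # near a sensible value, as in the init() used later
round(rbind(dispersed=dispersed_kas,informed=informed_kas),2)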
Model and data
The data are the same as two weeks ago. The only difference is that I realized a dose in mg/kg should be converted to total dose in order to get a sensible idea of clearance (CL). The model, then, is a slightly modified version, and runs through Stan. The problem with this model is not so much the formulation but rather, as indicated, the initialization. And even then, it seems to produce unacceptable samples at a high rate. I don't think this is a criticism of the model; the estimates seem quite reasonable.
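For reference, the prediction in the Stan model below corresponds to the usual one-compartment model with first-order absorption and elimination, written in terms of clearance:

$$C(t) = \frac{D\,k_a k_e}{CL\,(k_a - k_e)}\left(e^{-k_e t} - e^{-k_a t}\right)$$

with D the total dose, and ka and ke the absorption and elimination rate constants; the prefactor is the c0star of the code.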
library(MEMSS)
library(ggplot2)
library(rstan)
Theoph$dose <- Theoph$Dose*Theoph$Wt
datain <- list(
Time=Theoph$Time,
conc=Theoph$conc,
n=nrow(Theoph),
subject =c(1:nlevels(Theoph$Subject))[Theoph$Subject],
nsubject = nlevels(Theoph$Subject),
dose=unique(subset(Theoph,, c(Subject,dose)))$dose
)
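As a quick sanity check of the data list (not part of the original code), there should be one total dose per subject and concentration and time vectors over all records:
str(datain)          # n records of Time/conc, plus subject index and doses
length(datain$dose)  # 12: one total dose (mg) per subject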
smodel <- '
data {
int <lower=0> n;
int <lower=0> nsubject;
int <lower=1,upper=nsubject> subject[n];
real dose[nsubject];
real Time[n];
real conc[n];
}
parameters {
real <lower=0> kes[nsubject];
real <lower=0> kas[nsubject];
real <lower=0> CLs[nsubject];
real <lower=0> CL;
real lke;
real lka;
real <lower=0> sdr;
real kesd;
real kasd;
real CLsd;
}
transformed parameters {
real pred[n];
real <lower=0> c0star[nsubject];
for (i in 1:nsubject) {
c0star[i] <- dose[i]*((kes[i]*kas[i])/CLs[i])/
(kas[i]-kes[i]);
}
for (i in 1:n) {
pred[i] <- c0star[subject[i]]*
(exp(-kes[subject[i]]*Time[i])-exp(-kas[subject[i]]*Time[i]));
}
}
model {
conc ~ normal(pred,sdr);
kes ~ lognormal(lke,kesd);
kas ~ lognormal(lka,kasd);
lke ~ uniform(-3,3);
lka ~ uniform(-3,3);
CL ~ uniform(0.01,300);
CLs ~ normal(CL,CLsd);
kesd ~ uniform(0.01,2);
kasd ~ uniform(0.01,2);
CLsd ~ uniform(0.01,10);
}
generated quantities {
real Ka;
real Ke;
Ka <- exp(lka);
Ke <- exp(lke);
}'
fstan <- stan(model_code = smodel,
data = datain,
pars=c('CL','kes','kas','CLs',
'c0star','Ka','Ke'),
init= function() {
list(lka = rnorm(1,log(2),.3),
kas= rnorm(12,2,.2),
lke= rnorm(1,log(.08),.01),
kes= rnorm(12,.08,.01),
CLs = rnorm(12,1,.1),
CL = rnorm(1,1,.1),
kesd=runif(1,0.04,0.1),
kasd=runif(1,0.2,2),
sdr= runif(1,1,3),
CLsd=runif(1,.1,1)
)})
fstan
Inference for Stan model: smodel.
4 chains, each with iter=2000; warmup=1000; thin=1;
post-warmup draws per chain=1000, total post-warmup draws=4000.
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
CL 2.80 0.01 0.20 2.41 2.66 2.80 2.92 3.23 990 1.01
kes[1] 0.08 0.00 0.01 0.06 0.07 0.08 0.09 0.09 324 1.01
kes[2] 0.08 0.00 0.01 0.07 0.08 0.09 0.09 0.10 904 1.01
kes[3] 0.09 0.00 0.01 0.07 0.08 0.09 0.09 0.10 1314 1.00
kes[4] 0.09 0.00 0.01 0.08 0.09 0.09 0.10 0.12 675 1.01
kes[5] 0.09 0.00 0.01 0.08 0.08 0.09 0.09 0.11 995 1.00
kes[6] 0.09 0.00 0.01 0.07 0.08 0.09 0.09 0.10 914 1.00
kes[7] 0.09 0.00 0.01 0.07 0.08 0.09 0.09 0.10 1231 1.00
kes[8] 0.09 0.00 0.01 0.07 0.08 0.09 0.09 0.11 1326 1.01
kes[9] 0.08 0.00 0.01 0.07 0.08 0.08 0.09 0.10 1085 1.01
kes[10] 0.09 0.00 0.01 0.07 0.08 0.09 0.09 0.11 1129 1.00
kes[11] 0.09 0.00 0.01 0.07 0.08 0.09 0.09 0.11 1044 1.01
kes[12] 0.09 0.00 0.01 0.07 0.08 0.09 0.09 0.10 835 1.01
kas[1] 1.48 0.01 0.22 1.08 1.32 1.46 1.61 1.94 338 1.01
kas[2] 0.66 0.00 0.09 0.51 0.60 0.66 0.72 0.86 1262 1.00
kas[3] 4.00 0.03 0.90 2.67 3.40 3.87 4.41 6.16 1214 1.00
kas[4] 0.95 0.00 0.12 0.73 0.87 0.94 1.03 1.21 1476 1.00
kas[5] 2.12 0.01 0.32 1.56 1.90 2.10 2.34 2.78 1021 1.01
kas[6] 2.41 0.02 0.47 1.58 2.08 2.37 2.67 3.42 570 1.00
kas[7] 1.21 0.00 0.17 0.90 1.10 1.20 1.31 1.58 1277 1.00
kas[8] 1.50 0.01 0.18 1.17 1.39 1.50 1.61 1.87 739 1.01
kas[9] 1.27 0.01 0.24 0.88 1.10 1.26 1.41 1.81 1108 1.00
kas[10] 0.80 0.00 0.13 0.57 0.71 0.79 0.88 1.10 1264 1.00
kas[11] 1.45 0.01 0.24 1.03 1.27 1.43 1.60 1.98 1243 1.01
kas[12] 8.14 0.13 4.24 4.31 5.79 7.22 8.84 18.78 1001 1.00
CLs[1] 2.07 0.01 0.17 1.71 1.97 2.09 2.17 2.36 462 1.01
CLs[2] 2.07 0.00 0.13 1.80 1.98 2.06 2.15 2.34 1187 1.00
CLs[3] 3.38 0.01 0.25 2.91 3.21 3.38 3.54 3.91 860 1.00
CLs[4] 2.37 0.01 0.16 2.09 2.26 2.35 2.46 2.74 635 1.00
CLs[5] 2.99 0.01 0.22 2.61 2.84 2.97 3.11 3.46 819 1.00
CLs[6] 2.88 0.01 0.21 2.48 2.75 2.87 3.01 3.32 785 1.00
CLs[7] 2.73 0.01 0.19 2.37 2.60 2.73 2.84 3.11 1020 1.01
CLs[8] 2.39 0.01 0.16 2.09 2.28 2.38 2.49 2.73 680 1.02
CLs[9] 3.60 0.01 0.30 3.04 3.40 3.59 3.78 4.23 1085 1.00
CLs[10] 3.07 0.01 0.23 2.62 2.92 3.06 3.21 3.57 810 1.01
CLs[11] 3.15 0.01 0.23 2.70 2.99 3.15 3.30 3.62 944 1.01
CLs[12] 2.82 0.01 0.21 2.42 2.69 2.81 2.95 3.27 889 1.00
c0star[1] 12.85 0.07 0.84 11.26 12.26 12.84 13.45 14.45 148 1.01
c0star[2] 15.18 0.04 1.36 12.74 14.28 15.15 15.93 18.21 1225 1.00
c0star[3] 8.43 0.01 0.49 7.52 8.10 8.40 8.74 9.49 1415 1.01
c0star[4] 13.76 0.04 1.15 11.92 13.02 13.58 14.30 16.57 873 1.00
c0star[5] 9.96 0.02 0.62 8.78 9.57 9.91 10.32 11.32 1128 1.00
c0star[6] 9.87 0.03 0.63 8.73 9.43 9.85 10.27 11.25 603 1.00
c0star[7] 10.99 0.02 0.76 9.64 10.50 10.93 11.45 12.57 1373 1.00
c0star[8] 12.54 0.03 0.74 11.20 12.02 12.50 12.99 14.11 720 1.00
c0star[9] 8.08 0.02 0.68 6.82 7.62 8.06 8.46 9.45 793 1.01
c0star[10] 10.24 0.02 1.01 8.53 9.57 10.19 10.77 12.61 1730 1.00
c0star[11] 9.37 0.02 0.69 8.13 8.92 9.32 9.76 10.87 1187 1.00
c0star[12] 8.40 0.01 0.40 7.63 8.14 8.39 8.65 9.22 1757 1.00
Ka 1.63 0.02 0.42 0.95 1.34 1.58 1.87 2.57 516 1.01
Ke 0.09 0.00 0.01 0.08 0.08 0.09 0.09 0.10 516 1.02
lp__ 17.34 1.53 10.27 0.41 10.66 16.13 22.63 45.26 45 1.07
Samples were drawn using NUTS(diag_e) at Sun Apr 19 09:57:07 2015.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).
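The lp__ row stands out, with an Rhat of 1.07 and an effective sample size of only 45, so mixing is clearly not great. A minimal check of my own, using standard rstan calls, is to look at the trace plots of the population-level quantities:
traceplot(fstan, pars=c('CL','Ka','Ke','lp__'))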
Plot
This is just what I used last week, adapted for Stan. In addition, to get the correct scale, the mean dose in mg has been used in the calculations.
stansampls <- extract(fstan,c('CL','Ke','Ka'))
Time <- seq(1,max(datain$Time))
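# for each posterior draw, compute the predicted concentration curve at the mean dose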
la2 <- sapply(1:nrow(stansampls$CL),
function(i) {
CL <- stansampls$CL[i]
Ka <- stansampls$Ka[i]
Ke <- stansampls$Ke[i]
c0star <- mean(datain$dose)*((Ke*Ka)/CL)/(Ka-Ke)
y<- c0star* (exp(-Ke*Time)-exp(-Ka*Time))
})
res2 <- data.frame(
Time=Time,
Conc=apply(la2,1,mean),
C05=apply(la2,1,FUN=function(x) quantile(x,0.05)),
C95=apply(la2,1,FUN=function(x) quantile(x,0.95)))
ggplot(res2, aes(x=Time)) +
geom_line(aes(y=Conc))+
scale_y_log10('theophylline (mg/L)')+
geom_line(aes(y=conc,x=Time,col=Subject),alpha=1,data=Theoph)+
geom_ribbon(aes(ymin=C05, ymax=C95),alpha=.2)+
theme(legend.position='bottom')