
Hierarchical Two-Compartment PK Model

In this post I fit the Theoph dataset from MEMSS (data sets and sample analyses from Pinheiro and Bates, "Mixed-effects Models in S and S-PLUS", Springer, 2000) with a JAGS model. To quote the MEMSS manual:
Boeckmann, Sheiner and Beal (1994) report data from a study by Dr. Robert Upton of the kinetics of the anti-asthmatic drug theophylline. Twelve subjects were given oral doses of theophylline then serum concentrations were measured at 11 time points over the next 25 hours.
These data are analyzed in Davidian and Giltinan (1995) and Pinheiro and Bates (2000) using a two-compartment open pharmacokinetic model, for which a self-starting model function, SSfol, is available.

Model

The model used in the example for these data is SSfol. Its value is:
Dose * exp(lKe+lKa-lCl) * (exp(-exp(lKe)*input) - exp(-exp(lKa)*input))
    / (exp(lKa) - exp(lKe))
Here input is the time variable. This looks complex, but there is a trick: what is estimated is not Ke (the elimination rate) but its natural logarithm, and the same holds for Ka and CL. This is probably done to keep the parameters positive. Written in terms of the ordinary parameters the value is:
Dose * (Ke*Ka)/CL * (exp(-Ke*input) - exp(-Ka*input)) / (Ka - Ke)
This looks much simpler.
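As a quick sanity check (not part of the original analysis; the parameter values below are arbitrary illustrative choices), the plain-parameter form can be compared numerically with the value returned by SSfol():
# check that the rewritten formula matches SSfol() for arbitrary parameters
Dose  <- 4
input <- c(0.5, 1, 2, 5, 12)                  # time points in hours
lKe <- log(0.08); lKa <- log(1.5); lCl <- log(0.04)
Ke <- exp(lKe); Ka <- exp(lKa); CL <- exp(lCl)
v1 <- SSfol(Dose, input, lKe, lKa, lCl)       # value of the self-starting model
v2 <- Dose * (Ke*Ka)/CL * (exp(-Ke*input) - exp(-Ka*input)) / (Ka - Ke)
all.equal(as.numeric(v1), v2)                 # TRUE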

Estimates using nls

The example code uses nls() to fit the model for each subject separately. Below, the fit is wrapped in lapply() over the subjects, and the coefficients are exponentiated to get Ke, Ka and CL on the natural scale.
library(MEMSS)
la <- lapply(levels(Theoph$Subject), function(iSubject)
    {Theoph.i <- subset(Theoph, Subject == iSubject)   # data of one subject
      fm1 <- nls(conc ~ SSfol(Dose, Time, lKe, lKa, lCl),
          data = Theoph.i)
      ee <- exp(coef(fm1))                             # back-transform to Ke, Ka, CL
      dd <- data.frame(Ke=ee[1],Ka=ee[2],CL=ee[3])
      row.names(dd) <- iSubject
      dd
    }
)
nlsres <- do.call(rbind,la)
nlsres
          Ke        Ka         CL
A 0.05395450 1.7774170 0.01992348
B 0.07396616 0.6955019 0.03244300
C 0.09812331 3.8490404 0.05724601
D 0.10557584 0.8328979 0.04199695
E 0.10166133 1.9426573 0.04476553
F 0.08142497 2.4535654 0.03955890
G 0.08746697 1.1714752 0.03739991
H 0.08843514 1.4715045 0.04360427
I 0.09952647 1.1637219 0.05113727
J 0.10224639 0.6797358 0.05159475
K 0.09195675 1.3755229 0.04646244

Model in JAGS

In JAGS there are more direct mechanisms for keeping parameters positive. There is, however, one problem: for subject A an alternative set of parameters, Ka = 0.053 and Ke = 1.77, fits equally well, because swapping Ka and Ke leaves the model value unchanged. Combining MCMC samples from these two solutions just produces nonsense. To avoid this, I have placed the whole scaling in a positive parameter c0star. In the subsequent code the individual CL and its hyperparameter are estimated from c0star. To keep the model a directed acyclic graph (DAG), an additional data variable z and parameters CLr are introduced. In addition, dose has been made a subject-level variable rather than one attached to the individual time points.
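As a rough check on this reparameterisation (a sketch, not in the original post; it assumes the nlsres data frame from above and that the subjects appear in the data in the same order as their factor levels, which is also what the datain construction below assumes), the c0star values implied by the per-subject nls estimates can be computed:
# implied scaling constant: c0star = Dose*Ke*Ka / (CL*(Ka-Ke)) per subject
dose_per_subject <- unique(subset(Theoph, select = c(Subject, Dose)))$Dose
dose_per_subject * nlsres$Ke * nlsres$Ka /
    (nlsres$CL * (nlsres$Ka - nlsres$Ke))
These values stay well inside the dunif(0,50) prior used for c0star below.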

model1 <- function() {
  tau <- 1/pow(sigma,2)
  sigma ~ dunif(0,100)
  for (i in 1:n) {
    pred[i] <- c0star[subject[i]]*
        (exp(-ke[subject[i]]*time[i])-exp(-ka[subject[i]]*time[i]))
    conc[i] ~ dnorm(pred[i],tau)
  }
  for(i in 1:nsubject) {
    ka[i] ~ dlnorm(kam,katau) 
    ke[i] ~ dlnorm(kem,ketau) 
    c0star[i] ~ dunif(0,50)
# c0star=dose*((ke*ka)/CL)/(ka-ke)    
    CL[i] <- Dose[i]*ka[i]*ke[i]/((ka[i]-ke[i])*c0star[i])
    CLr[i] <- CL[i]-CLm
    z[i] ~ dnorm(CLr[i],CLtau)
  }
  kam ~ dunif(-10,10) 
  katau <- 1/pow(kasd,2) 
  kasd ~ dunif(0,100)
  Kam <- exp(kam)
#  ta0_5 <- log(2)/Kam
  
  kem ~ dunif(-10,10)
  ketau <- 1/pow(kesd,2)
  kesd ~ dunif(0,100)
  Kem <- exp(kem)
#  te0_5 <- log(2)/Kem
  
  CLm ~ dunif(0,100)
  CLtau <- 1/pow(CLsd,2)
  CLsd ~ dunif(0,100)
}

Below is the usual code for running the model. While writing this post I noticed quite some sensitivity to the initial values.
library(R2jags)
datain <-  list(
    time=Theoph$Time,
    conc=Theoph$conc,
    n=nrow(Theoph),
    subject =c(1:nlevels(Theoph$Subject))[Theoph$Subject],
    nsubject = nlevels(Theoph$Subject),
    z=rep(0, nlevels(Theoph$Subject)),
    Dose=unique(subset(Theoph,, c(Subject,Dose)))$Dose
)
parameters <- c('Kem','Kam','ke','ka','CL','CLm')
inits <- function() {
  list(kam = rnorm(1,log(1),.3),
      ka=   rnorm(12,1,.2), 
      kem=   rnorm(1,log(.08),.01),
      ke=    rnorm(12,.08,.01),
      c0star = runif(12,9,11)
  )}

jagsfit <- jags(datain, model=model1, 
    inits=inits,
    parameters=parameters, progress.bar="gui",
    n.iter=10000,
    n.chains=4,n.thin=2)
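The summary shown below is what printing the fit produces. A few optional diagnostics (a sketch, not part of the original post) can be run on the same object:
library(coda)                         # for as.mcmc() and further diagnostics
print(jagsfit)                        # summary table as shown below
traceplot(jagsfit, varname = "CLm")   # visual check of mixing for CLm
jagsfit.mcmc <- as.mcmc(jagsfit)      # coda object, e.g. for gelman.diag()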

Results

The estimated parameters seem close to the parameters from nls(), which is sufficient for this blog post. The estimates show quite some shrinkage: almost all subjects have Ke close to 0.086. CL and Ka are also shrunk, but much less than Ke. (A side-by-side comparison with the nls estimates is sketched after the output below.)
Inference for Bugs model at "C:/Users/…/modeld8059bf37f0.txt", fit using jags,
 4 chains, each with 10000 iterations (first 5000 discarded), n.thin = 2
 n.sims = 10000 iterations saved
         mu.vect sd.vect    2.5%     25%     50%     75%   97.5%  Rhat n.eff
CL[1]      0.026   0.002   0.022   0.025   0.026   0.028   0.030 1.003  1100
CL[2]      0.035   0.002   0.031   0.034   0.035   0.037   0.039 1.001  4200
CL[3]      0.051   0.004   0.044   0.048   0.051   0.053   0.058 1.002  2300
CL[4]      0.038   0.002   0.034   0.037   0.038   0.040   0.044 1.006   460
CL[5]      0.041   0.003   0.036   0.039   0.041   0.043   0.047 1.005   620
CL[6]      0.041   0.003   0.035   0.039   0.041   0.042   0.046 1.002  2200
CL[7]      0.037   0.003   0.033   0.036   0.037   0.039   0.043 1.003   990
CL[8]      0.043   0.003   0.038   0.041   0.043   0.044   0.048 1.004   960
CL[9]      0.046   0.004   0.039   0.044   0.046   0.048   0.053 1.002  1600
CL[10]     0.046   0.003   0.040   0.044   0.046   0.049   0.053 1.002  1600
CL[11]     0.044   0.003   0.038   0.042   0.044   0.046   0.051 1.002  2000
CL[12]     0.033   0.002   0.029   0.031   0.033   0.034   0.038 1.007   430
CLm        0.040   0.003   0.035   0.038   0.040   0.042   0.046 1.004  1200
Kam        1.673   0.450   0.998   1.386   1.616   1.885   2.700 1.001  6400
Kem        0.086   0.005   0.076   0.082   0.085   0.089   0.096 1.010   420
ka[1]      1.482   0.205   1.127   1.338   1.466   1.607   1.934 1.001  9300
ka[2]      0.656   0.089   0.507   0.593   0.649   0.710   0.854 1.006   510
ka[3]      3.964   0.860   2.657   3.372   3.836   4.407   5.973 1.001  5700
ka[4]      0.954   0.125   0.730   0.869   0.948   1.032   1.219 1.009   310
ka[5]      2.110   0.325   1.555   1.882   2.080   2.302   2.827 1.003  1100
ka[6]      2.385   0.441   1.671   2.073   2.336   2.637   3.411 1.002  2200
ka[7]      1.208   0.171   0.910   1.092   1.195   1.309   1.584 1.002  2000
ka[8]      1.502   0.183   1.171   1.376   1.493   1.615   1.890 1.003  1100
ka[9]      1.287   0.245   0.878   1.112   1.261   1.429   1.834 1.002  1600
ka[10]     0.787   0.139   0.551   0.690   0.774   0.871   1.094 1.004   910
ka[11]     1.446   0.248   1.021   1.274   1.422   1.593   2.006 1.002  2400
ka[12]     8.409   5.846   4.277   5.818   7.078   8.998  20.996 1.001  7100
ke[1]      0.080   0.008   0.061   0.075   0.081   0.085   0.093 1.004   990
ke[2]      0.084   0.007   0.069   0.080   0.084   0.089   0.099 1.003  1100
ke[3]      0.085   0.007   0.072   0.081   0.085   0.089   0.101 1.002  2500
ke[4]      0.089   0.009   0.076   0.083   0.088   0.094   0.110 1.017   210
ke[5]      0.089   0.008   0.076   0.083   0.087   0.093   0.109 1.007   440
ke[6]      0.085   0.007   0.071   0.081   0.085   0.089   0.100 1.003  1300
ke[7]      0.086   0.007   0.073   0.082   0.086   0.090   0.104 1.006   530
ke[8]      0.086   0.007   0.073   0.081   0.085   0.090   0.101 1.008   420
ke[9]      0.086   0.008   0.071   0.081   0.085   0.090   0.103 1.005   970
ke[10]     0.086   0.008   0.071   0.081   0.085   0.090   0.104 1.009   480
ke[11]     0.086   0.008   0.072   0.081   0.085   0.090   0.103 1.005   630
ke[12]     0.088   0.008   0.075   0.083   0.087   0.092   0.106 1.008   340
deviance 200.433   9.015 184.713 193.957 199.825 206.049 220.328 1.002  2200

For each parameter, n.eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor (at convergence, Rhat=1).

DIC info (using the rule, pD = var(deviance)/2)
pD = 40.6 and DIC = 241.0
DIC is an estimate of expected predictive error (lower deviance is better).
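
For the side-by-side comparison mentioned above (a sketch, not in the original post; it assumes the nlsres object from the nls section with one row per subject, and the fitted jagsfit object), the posterior means can be placed next to the nls estimates:
# posterior means next to the per-subject nls estimates (illustration only)
post <- jagsfit$BUGSoutput$mean
round(cbind(nls_Ke = nlsres$Ke, jags_Ke = post$ke,
            nls_Ka = nlsres$Ka, jags_Ka = post$ka,
            nls_CL = nlsres$CL, jags_CL = post$CL), 3)
Ke, and to a lesser degree Ka and CL, is pulled towards its population mean, which is the shrinkage mentioned above.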
