In this post I extend the multiplicative Bayesian sensory profiling model with effects for rounds and sessions. It is not a difficult extension, but it brings the need for informative priors into the model. I do believe round and session effects exist, but they are small. The Bayesian paradigm allows this belief in smallness to be built directly into the model.
Rounds
This model uses round effects to capture effects due to carry-over and contrast. To explain these, envision a panelist tasting. He or she rinses to clean the mouth, a product is served and subsequently tasted. After all descriptors for the product have been scored, a break of a few minutes is planned. In this break there is time for rinsing, perhaps with crackers, apples or another convenient product to clean the palate. After these few minutes the next product is served.
In the ideal situation, any residuals from the first product are removed before the second product is tasted. In the less ideal reality, this is not always so. Hence a product is influenced by the previous product: this is carry-over. It is also possible, if the first product is very strong in taste and the second weak, that the panelist is inclined to enhance the difference due to the contrast with the first product. Again, a property of the first product influences the second product; this is another way to get a carry-over effect. There are some variations on this, but the core of the message is that a product evaluated second is evaluated differently than it would be as first product. There are protocols such as rinsing to minimize this difference, and there are experimental designs (Williams designs) to ensure these effects do not result in bias in product effects. In the end, however, they may still be there.
In the current model I have chosen to keep things simple and make the round effect a plain round effect, independent of the products tasted and of the panelists tasting them. This is obviously a simplification, but the model is complex enough as it is.
Implementation of a round effect
Given the way I explained the round effect, it follows that I see it as a perception effect. The consequence is that the round effect is subject to the same scaling as the product effect. This leads to the following term in the model:
fit[i] <- … + (mProduct[Product[i]] + mRound[Round[i]])*sPanelist[Panelist[i]]
The second part of the round effect is the hyperdistribution for mRound. Rather than making this uninformative, I want it to be informative, in order to express that these effects are small. mRound is normally distributed with precision tauRound:
mRound[i] ~ dnorm(0,tauRound)
The prior is on the distribution of tauRound. The associated standard deviation follows a half-normal distribution with standard deviation 1/2 (precision 4, since sd = 1/sqrt(precision)). Hence sdRound is larger than 0 and probably less than 1 (two standard deviations):
tauRound <- 1/(sdRound^2)
sdRound ~ dnorm(0,4) %_% T(0,)
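To check that this prior indeed encodes "small", here is a quick simulation of the implied half-normal (plain R, not part of the JAGS model; the quantile values are approximate):
set.seed(42)
sdRound_prior <- abs(rnorm(1e5, mean = 0, sd = 0.5)) # half-normal with sd 1/2
quantile(sdRound_prior, c(0.5, 0.95)) # roughly 0.34 and 0.98: sdRound most likely below 1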
The construction %_% T(0,) needs explanation. Basically T(0,) truncates the distribution to be larger than 0 and smaller than infinity. JAGS happily accepts the format dnorm(0,4) T(0,). However, since the model is written in R and stored as an R function, this is an illegal statement: an operator is needed between the two terms. Hence the dummy operator %_%. R is happy and assumes %_% will be resolved at runtime. Subsequently write.model finds this operator and removes it before writing the model to the temp file, so JAGS never knows it was there. Everybody happy.
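A minimal demonstration of the trick in isolation (the exact number formatting that write.model produces may differ by R2WinBUGS version):
library(R2WinBUGS) # write.model lives here; R2jags loads it as well
toy <- function() {
s ~ dnorm(0,4) %_% T(0,)
}
write.model(toy, con=stdout())
# the written model reads s ~ dnorm(0,4) T(0,) : the %_% operator is gone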
Interpretation of the round effect
In terms of interpretation, there are two aspects. First, the sizes of the individual round effects. If the first round has a clear effect, this may mean that an improved palate cleanser can improve the data. The second aspect is the size of the round effect as such. If the effect is large, then it is possible that the tasting protocol needs to be adapted. Both of these are a matter of judgement rather than science.
Session effect
The session effect is a slightly different kind of beast than the round effect. In pure ANOVA terms, it states that scores on one day are different, lower or higher, than on another day. Unless one thinks that a product changes between days, this is actually an improbable effect. It is much more probable that, for each panelist separately, one day differs from the other day. In terms of mixed models, panelist * day is a random effect. Given the way this is introduced, the effect is additive on top of the panelist effect; hence it does not depend on a panelist's scale usage.
fit[i] <- … + mSessionPanelist[Session[i],Panelist[i]] + …
The prior for mSessionPanelist[ ] is the same as for mRound[ ].
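For readers who think in classical mixed-model terms, a rough additive analogue would be the sketch below. This is an illustration only: it ignores the multiplicative panelist scaling of the JAGS model and simply uses the sensochoc data loaded in the R code at the end of this post.
library(lme4)
# panelist-by-session random intercept next to a plain panelist intercept
fit_lmer <- lmer(CocoaA ~ Product + factor(Rank) + (1|Panelist) + (1|Panelist:Session), data=sensochoc)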
Interpretation of the SessionPanelist effect is simpler than for rounds. There is not really much point in looking at each individual value, unless there is reason to suspect high variation for a specific panelist. Hence only the standard deviation is extracted from JAGS. A big standard deviation means that the panel is unsure and needs more training on the associated descriptor.
Results
The JAGS output shows a low value for n.eff and an Rhat a bit larger than 1 for the variable sdSessionPanelist. It appears this parameter has some difficulty mixing and updating; this is also the reason the number of iterations has been increased to 20000. It seems round is a bit more important than PanelistSession. The round effect shows a large value for the first round, hence perhaps a better palate cleanser might be in order. The second round has a similarly sized negative effect, perhaps a contrast effect following the big effect of the first round.
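A quick way to inspect that mixing (a sketch; jagsfit.mc is the mcmc.list created in the R code at the end of this post):
traceplot(jagsfit.mc[,'sdSessionPanelist'])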
Inference for Bugs model at "/tmp/RtmpJUFGM7/mijnmodel.txt", fit using jags,
4 chains, each with 20000 iterations (first 10000 discarded), n.thin = 10
n.sims = 4000 iterations saved
mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff
Productdiff[1] 0.502 0.274 -0.022 0.318 0.501 0.684 1.052 1.001 4000
Productdiff[2] 2.109 0.288 1.542 1.924 2.110 2.294 2.681 1.001 4000
Productdiff[3] 1.607 0.282 1.059 1.419 1.610 1.796 2.159 1.001 4000
Productdiff[4] 0.657 0.272 0.132 0.478 0.651 0.842 1.202 1.001 4000
Productdiff[5] 0.155 0.268 -0.373 -0.026 0.153 0.337 0.674 1.001 4000
Productdiff[6] -1.452 0.273 -1.989 -1.639 -1.454 -1.268 -0.920 1.001 4000
Productdiff[7] 0.281 0.271 -0.252 0.100 0.279 0.465 0.806 1.001 4000
Productdiff[8] -0.222 0.270 -0.751 -0.400 -0.221 -0.037 0.313 1.001 4000
Productdiff[9] -1.829 0.283 -2.391 -2.018 -1.826 -1.643 -1.268 1.001 4000
Productdiff[10] -0.377 0.265 -0.897 -0.553 -0.381 -0.200 0.145 1.001 4000
Productdiff[11] 0.541 0.270 0.032 0.354 0.535 0.725 1.078 1.001 4000
Productdiff[12] 0.038 0.270 -0.482 -0.141 0.038 0.221 0.562 1.001 4000
Productdiff[13] -1.569 0.276 -2.114 -1.753 -1.573 -1.383 -1.021 1.001 4000
Productdiff[14] -0.117 0.266 -0.638 -0.300 -0.114 0.060 0.402 1.001 4000
Productdiff[15] 0.260 0.264 -0.255 0.080 0.257 0.436 0.795 1.001 4000
gsd 1.611 0.068 1.482 1.564 1.608 1.655 1.747 1.001 4000
mRound[1] 0.424 0.258 -0.063 0.251 0.409 0.585 0.956 1.001 4000
mRound[2] -0.417 0.264 -0.963 -0.582 -0.401 -0.241 0.060 1.001 3300
mRound[3] 0.272 0.250 -0.198 0.112 0.261 0.425 0.807 1.001 3300
mRound[4] -0.191 0.255 -0.737 -0.346 -0.179 -0.024 0.296 1.001 4000
mRound[5] -0.275 0.245 -0.796 -0.430 -0.262 -0.111 0.178 1.001 4000
mRound[6] 0.090 0.244 -0.406 -0.058 0.089 0.240 0.574 1.001 4000
meanProduct[1] 6.973 0.202 6.587 6.837 6.971 7.114 7.360 1.001 4000
meanProduct[2] 6.471 0.196 6.089 6.343 6.470 6.603 6.851 1.001 3500
meanProduct[3] 4.864 0.207 4.463 4.725 4.864 5.000 5.275 1.001 4000
meanProduct[4] 6.316 0.193 5.941 6.188 6.311 6.445 6.702 1.001 4000
meanProduct[5] 6.693 0.195 6.312 6.564 6.694 6.822 7.080 1.001 4000
meanProduct[6] 6.433 0.192 6.039 6.304 6.438 6.564 6.800 1.001 4000
sdPanelist 0.959 0.176 0.646 0.837 0.943 1.064 1.350 1.001 4000
sdProduct 0.896 0.393 0.427 0.638 0.804 1.049 1.903 1.001 4000
sdRound 0.425 0.170 0.167 0.307 0.401 0.512 0.830 1.002 3000
sdSessionPanelist 0.237 0.147 0.020 0.119 0.217 0.336 0.560 1.024 300
For each parameter, n.eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
Product differences are the same as before. However, when the round effect had a smaller prior precision, and hence a larger standard deviation, the product difference between 1 and 6 was not observed. It is a bit disturbing that this effect is so clearly dependent on the prior employed.
Product means are slightly closer to each other than with the previous model, while the standard deviations are almost the same.
Mean SD Naive SE Time-series SE
choc1 6.973338 0.2018855 0.003192090 0.003270188
choc2 6.470882 0.1958250 0.003096265 0.003057404
choc3 4.863912 0.2069963 0.003272899 0.003379317
choc4 6.316028 0.1925660 0.003044736 0.003290025
choc5 6.692635 0.1949687 0.003082725 0.003631917
choc6 6.432695 0.1921969 0.003038899 0.003378925
Discussion
The beauty of building models in JAGS is revealed here. A few effects are added, but the structure is not overly influenced.
However, the problems are revealed as well. The prior on the round effects has influence on our opinion of product differences. There are two aspects to this. One is the undesired effect of trying to do hypothesis testing: a small change in data or model can push a parameter from one side of the brink to the other. The reality is that there is some indication an effect exists, and a black-and-white line should not be drawn. The second aspect relates to model building itself. On the one hand it is undesirable that a change in prior influences the interpretation. On the other hand, this is what we do anyway: if a classical ANOVA is used, adding or removing a factor has similar effects on significances. Getting it from a prior is a novelty, but should not make much of a difference in how we look at the data analysis.
A final discussion point is how to combine the round and session effects with the scale effect previously used. The current approach is to make this choice on philosophical grounds rather than on data. While this is not ideal, I have the feeling it is not much of a problem. The round and session effects are small. Given the amount of data, the model complexity and the size of the effects, I don't even think it is possible to discriminate between model variations based on the data. So, I do not think this is a large problem.
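A prior-sensitivity check is cheap, though: widen the half-normal on sdRound, refit, and compare the Productdiff intervals. A sketch, reusing the objects defined in the R code below; note that write.model reformats numbers, so the regular expression is an assumption about how the written model file looks and should be verified against the file:
model_txt <- readLines(model.file)
# change the sdRound prior from precision 4 (sd 1/2) to precision 1 (sd 1)
model_txt <- sub('sdRound ~ dnorm\\([^)]*\\)', 'sdRound ~ dnorm(0,1)', model_txt)
model.file2 <- file.path(tempdir(),'mijnmodel2.txt')
writeLines(model_txt, model.file2)
jagsfit2 <- jags(data=data_list, inits=inits, model.file=model.file2,
parameters.to.save=parameters, n.chains=4, DIC=FALSE, n.iter=20000)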
R code
library(SensoMineR)
library(coda)
library(R2jags)
FullContrast <- function(n) {
UseMethod('FullContrast',n)
}
FullContrast.default <- function(n) stop('FullContrast only takes integer and factors')
FullContrast.integer <- function(n) {
mygrid <- expand.grid(v1=1:n,v2=1:n)
mygrid <- mygrid[mygrid$v1<mygrid$v2,]
rownames(mygrid) <- 1:nrow(mygrid)
as.matrix(mygrid)
}
FullContrast.factor <- function(n) {
FullContrast(nlevels(n))
}
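# quick check of the contrast helper: all pairs with v1 < v2 for 3 levels
FullContrast(3L)
#   v1 v2
# 1  1  2
# 2  1  3
# 3  2  3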
data(chocolates)
data_list <- with(sensochoc,list(Panelist = as.numeric(Panelist),
nPanelist = nlevels(Panelist),
Product = as.numeric(Product),
nProduct = nlevels(Product),
Round = Rank,
nRound = length(unique(Rank)),
Session = Session,
nSession = length(unique(Session)),
y=CocoaA,
N=nrow(sensochoc)))
data_list$Productcontr <- FullContrast(sensochoc$Product)
data_list$nProductcontr <- nrow(data_list$Productcontr)
model.file <- file.path(tempdir(),'mijnmodel.txt')
mymodel <- function() {
# core of the model
for (i in 1:N) {
fit[i] <- grandmean + mPanelist[Panelist[i]] + mSessionPanelist[Session[i],Panelist[i]] +
(mProduct[Product[i]] + mRound[Round[i]]) * sPanelist[Panelist[i]]
#
y[i] ~ dnorm(fit[i],tau)
}
# grand mean and residual
tau ~ dgamma(0.001,0.001)
gsd <- sqrt(1/tau)
grandmean ~ dnorm(0,.001)
# variable Panelist distribution
for (i in 1:nPanelist) {
mPanelist[i] ~ dnorm(0,tauPanelist)
}
tauPanelist ~ dgamma(0.001,0.001)
sdPanelist <- sqrt(1/tauPanelist)
# Product distribution
for (i in 1:nProduct) {
mProduct[i] ~ dnorm(0,tauProduct)
}
tauProduct ~ dgamma(0.001,0.001)
sdProduct <- sqrt( 1/tauProduct)
# Round distribution
for (i in 1:nRound) {
mRound[i] ~ dnorm(0,tauRound)
}
tauRound <- 1/(sdRound^2)
sdRound ~ dnorm(0,4) %_% T(0,)
# SessionPanelist distribution
for (i in 1:nSession) {
for (j in 1:nPanelist) {
mSessionPanelist[i,j] ~ dnorm(0,tauSessionPanelist)
}
}
tauSessionPanelist <- 1/(sdSessionPanelist^2)
sdSessionPanelist ~ dnorm(0,4) %_% T(0,)
#
# distribution of the multiplicative effect
for (i in 1:nPanelist) {
sPanelist[i] <- exp(EsPanelist[i])
EsPanelist[i] ~ dnorm(0,9)
}
# getting the interesting data
# true means for Panelist
for (i in 1:nPanelist) {
meanPanelist[i] <- grandmean + mPanelist[i] +
mean(mSessionPanelist[1:nSession,i]) +
( mean(mProduct[1:nProduct]) + mean(mRound[1:nRound]))*sPanelist[i]
}
# true means for Product
for (i in 1:nProduct) {
meanProduct[i] <- grandmean +
mean(mPanelist[1:nPanelist]) +
mean(mSessionPanelist[1:nSession,1:nPanelist]) +
(mProduct[i] + mean(mRound[1:nRound]) )*exp(mean(EsPanelist[1:nPanelist]))
}
for (i in 1:nProductcontr) {
Productdiff[i] <- meanProduct[Productcontr[i,1]]-meanProduct[Productcontr[i,2]]
}
}
write.model(mymodel,con=model.file)
inits <- function() list(
grandmean = rnorm(1,3,1),
mPanelist = c(0,rnorm(data_list$nPanelist-1)) ,
mProduct = c(0,rnorm(data_list$nProduct-1)) ,
EsPanelist = rnorm(data_list$nPanelist) ,
tau = runif(1,1,2),
tauPanelist = runif(1,1,3),
tauProduct = runif(1,1,3),
mRound = rnorm(data_list$nRound),
sdRound = abs(rnorm(1)),
mSessionPanelist= matrix(rnorm(data_list$nPanelist*data_list$nSession),nrow=data_list$nSession),
sdSessionPanelist = abs(rnorm(1))
)
#parameters <- c('sdPanelist','sdProduct','gsd','meanPanelist','meanProduct','Productdiff','sPanelist','sdRound','mRound')
parameters <- c('sdPanelist','sdProduct','gsd','meanProduct','Productdiff','sdRound','mRound','sdSessionPanelist')
jagsfit <- jags(data=data_list,inits=inits,model.file=model.file,parameters.to.save=parameters,n.chains=4,DIC=FALSE,n.iter=20000)
jagsfit
#plot(jagsfit)
jagsfit.mc <- as.mcmc(jagsfit)
#plot(jagsfit.mc)
fitsummary <- summary(jagsfit.mc)
# extract differences
Productdiff <- fitsummary$quantiles[ grep('Productdiff',rownames(fitsummary$quantiles)),]
# extract differences different from 0
data_list$Productcontr[Productdiff[,1]>0 | Productdiff[,5]<0,]
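# optional: show the differing pairs as product names instead of level indices
# (assumes the v1/v2 columns index the levels of sensochoc$Product)
diffpairs <- data_list$Productcontr[Productdiff[,1]>0 | Productdiff[,5]<0,]
matrix(levels(sensochoc$Product)[diffpairs], ncol=2)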
# get the product means
ProductMean <- fitsummary$statistics[ grep('meanProduct',rownames(fitsummary$quantiles)),]
rownames(ProductMean) <- levels(sensochoc$Product)
ProductMean