I had an email exchange with Jeff Malins, who asked several questions about growth curve analysis. I often get questions of this sort and Jeff agreed to let me post excerpts from our (email) conversation. The following has been lightly edited for clarity and to be more concise.
Jeff asked:
I’ve fit some curves for accuracy data using both linear and logistic approaches, and in both versions one of the conditions acts strangely. As is especially evident in the linear plots, the green line is not a line! Is this an issue with the fitted() function you’ve come across before? Or is it a signal that something is amiss with the model?

I answered:
In the logistic model, some curvature is reasonable because the model is linear on the logit scale, which becomes curved when projected back to the proportion scale. Since all of the model fits look curved for the logistic model, that seems like a reasonable explanation.
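To make that concrete, here is a minimal sketch with simulated data (the names Time, Correct, and NumTrials are placeholders, not Jeff's actual variables): the model is a straight line on the logit scale, but fitted() returns predicted proportions, so every plotted fit is curved.

```r
library(lme4)
library(ggplot2)

# Simulated accuracy data: 20 subjects, 10 time points, 25 trials each
set.seed(1)
d <- expand.grid(Subject = factor(1:20), Time = 0:9)
d$NumTrials <- 25
# Subject-specific baselines plus a linear effect of time, on the logit scale
logit_p <- -1 + 0.4 * d$Time + rnorm(20, 0, 0.5)[d$Subject]
d$Correct <- rbinom(nrow(d), size = d$NumTrials, prob = plogis(logit_p))

# First-order logistic GCA: linear in Time on the logit scale
m_log <- glmer(cbind(Correct, NumTrials - Correct) ~ Time + (1 | Subject),
               data = d, family = binomial)

# fitted() returns predicted proportions, so the fitted lines look curved
# even though the model is a straight line on the logit scale
d$fit <- fitted(m_log)
ggplot(d, aes(Time, Correct / NumTrials, group = Subject)) +
  geom_point(alpha = 0.3) +
  geom_line(aes(y = fit))
```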
I am not sure what is going wrong in your linear model, but one possibility is that it is some odd consequence of unequal numbers of observations (if that is even relevant here).

Unequal numbers of trials turned out to be part of the problem, which Jeff fixed. He then followed up with a few more questions:
(1) If I create a first-order orthogonal time term and then use it in the model (ot1 in your code), my understanding is that it is centered in the distribution as opposed to starting at the origin. So for linear models fit using ot1, the intercept term seems to me to index global amplitude differences in the model fits (translation in the y-direction) rather than a y-intercept. Is this correct?
(2) My understanding is that one only needs to generate orthogonal time terms when fitting second-order models or higher. However, I fit a first-order logistic GCA that failed to converge when I used my raw time variable and only converged when I transformed it into an orthogonal polynomial with the same number of steps.
(3) I am unclear as to when to include a “1” in the random effects structure for conditions nested within subjects. For example, what is the difference between (1 + ot1 | Subject:Condition) and (ot1 | Subject:Condition)? I have the former in the linear GCA and the latter in the logistic GCA.

My answers:
(1) Yes, that is correct. I might quibble slightly with your terminology and say that you’ve moved the origin to the center of your time window rather than the baseline time point, but the concept is the same. Also, I find that having the intercept represent the overall average is often a helpful property because traditional “area under the curve” analyses are then represented by the intercept term.
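Continuing the simulated sketch from above, here is one way to see this (poly() stands in for however ot1 was actually constructed, and Accuracy is a placeholder outcome):

```r
# Build a first-order orthogonal time term with poly(); it is centered,
# so the origin sits at the middle of the time window rather than at
# the first time point
t <- sort(unique(d$Time))
d$ot1 <- poly(t, 1)[match(d$Time, t), 1]
mean(d$ot1)  # essentially 0: ot1 is centered on the time window

# With a centered linear term, the fixed intercept estimates the overall
# (time-averaged) level, the model-based analogue of an
# "area under the curve" summary
d$Accuracy <- d$Correct / d$NumTrials
m_lin <- lmer(Accuracy ~ ot1 + (ot1 | Subject), data = d)
fixef(m_lin)["(Intercept)"]  # close to mean(d$Accuracy)
```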
(2) Well, centering the linear term does affect interpretation of the intercept, which is sometimes worth doing. However, I suspect you were thinking about de-correlating the time terms, in which case you are correct, that only matters when there are multiple time terms (starting with second-order polynomials). Your point about convergence is a slightly trickier issue. Convergence can be touchy and generally works better when the predictors are on similar scales. Raw time variables typically go from 0 to 10 or 20, but other predictors are often 0/1, so there is an order of magnitude difference there. The orthogonal linear time term should be in the -1 to 1 range, which can help with convergence.
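Here is what that scale difference looks like in the running sketch (the exact ranges depend on how time is coded, so treat the numbers as illustrative):

```r
range(d$Time)  # 0 to 9: an order of magnitude larger than 0/1 predictors
range(d$ot1)   # roughly -0.5 to 0.5: comparable to 0/1 predictors

# The same first-order logistic model on raw vs. orthogonal time; the
# fits are equivalent, but the better-scaled predictor tends to be
# kinder to the optimizer (raw time can trigger convergence warnings)
m_raw <- glmer(cbind(Correct, NumTrials - Correct) ~ Time + (1 | Subject),
               data = d, family = binomial)
m_ot  <- glmer(cbind(Correct, NumTrials - Correct) ~ ot1 + (1 | Subject),
               data = d, family = binomial)
```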
(3) There is no difference between those two random effect definitions: the “random intercepts” (1 | …) are included by default even if you omit the 1. Sometimes I include the 1 to make my code more transparent when I am teaching about GCA, but I almost never use it in my own code. Including the 1 can also make it easier to think about how to de-correlate random effects, but that’s probably too tangential for now.
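A quick way to confirm the equivalence, continuing the sketch (the two-level Condition here is hypothetical, added only so the nesting syntax has something to refer to):

```r
# Hypothetical between-subject Condition, added only to illustrate syntax
d$Condition <- factor(rep(c("A", "B"), length.out = nrow(d)))

# The random intercept (1 | ...) is included by default, so these two
# specifications define exactly the same model
m1 <- lmer(Accuracy ~ ot1 + (1 + ot1 | Subject:Condition), data = d)
m2 <- lmer(Accuracy ~ ot1 + (ot1 | Subject:Condition), data = d)
all.equal(fixef(m1), fixef(m2))  # TRUE
```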