Models for individual subjects are becoming increasingly popular in psychological research. One reason is that it is difficult to make inferences from between-person data to within-person processes. Another is that time series obtained from individuals are becoming increasingly available due to the ubiquity of mobile devices. The central goal of so-called idiographic modeling is to tap into the within-person dynamics underlying psychological phenomena. With this goal in mind, many researchers have set out to analyze the multivariate dependencies in within-person time series. The simplest and most popular model for such dependencies is the first-order Vector Autoregressive (VAR) model, in which each variable at the current time point is predicted by (a linear function of) all variables (including itself) at the previous time point.
A key assumption of the standard VAR model is that its parameters do not change over time. However, often one is interested in exactly such changes. For example, one could be interested in relating changes in parameters to other variables, such as changes in a person's environment; this could be a new job, the seasons, or the impact of a global pandemic. In less exploratory designs, one could examine what impact certain interventions (e.g., medication or therapy) have on the interactions between symptoms.
In this blog post I give a very brief overview of how to estimate a time-varying VAR model with the kernel smoothing approach, which we discussed in this recent tutorial paper. This method is based on the assumption that parameters change smoothly over time, which means that parameters cannot "jump" from one value to another. I then focus on how to estimate and analyze this type of time-varying VAR model with the R-package mgm.
Estimating time-varying Models via Kernel Smoothing
The core idea of the kernel smoothing approach is the following: we choose equally spaced time points across the duration of the whole time series and then estimate "local" models at each of those time points. All local models taken together then constitute the time-varying model. By "local" models we mean models that are largely based on time points close to the time point at hand. This is achieved by weighting observations accordingly during parameter estimation. This idea is illustrated for a toy data set in the following figure:
Here we only illustrate estimating the local model at $t=3$. The left panel shows the 10 time points of this time series. The column $w_{t_e=3}$ in red indicates a possible set of weights we could use to estimate the local model at $t=3$: the data at time points close to $t=3$ get the highest weight, and time points further away get increasingly small weights. The function that defines these weights is shown in the right panel. The blue column in the left panel, and the corresponding blue function on the right, indicate another possible weighting. Using this weighting, we combine fewer observations, all of them close in time. This allows us to detect more "time-varyingness" in the parameters, because we smooth over fewer time points. On the other hand, we use less data, which makes our estimates less reliable. It is therefore important to choose a weighting function that strikes a good balance between sensitivity to "time-varyingness" and stable estimates. In the method presented here we use a Gaussian weighting function (also called a kernel), which is defined by its standard deviation (or bandwidth). We return to how to select a good bandwidth parameter below.
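To make the weighting concrete, here is a minimal sketch in base R of how such Gaussian kernel weights could be computed for a toy series of 10 time points. The estimation point, bandwidth value, and rescaling are illustrative choices, not the package's internal implementation.

```r
# Gaussian kernel weights for a local model at one estimation point (toy example)
time_norm <- seq(0, 1, length = 10)  # 10 equally spaced time points on the normalized [0, 1] axis
t_e       <- 3 / 10                  # location of the local model (roughly t = 3)
bandwidth <- 0.2                     # standard deviation of the Gaussian kernel (illustrative)

w <- dnorm(time_norm, mean = t_e, sd = bandwidth)
w <- w / max(w)                      # rescale so that the largest weight equals 1
round(w, 2)
```

Making the bandwidth smaller concentrates the weights around the estimation point (the blue situation in the figure); making it larger spreads them out (the red situation).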
In this blog post I focus on how to estimate time-varying models with the R-package mgm. For a more detailed explanation of the method see our recent tutorial paper.
Loading & Inspecting the Data
To illustrate estimating time-varying VAR models, I use an ESM time series of 12 mood-related variables measured up to 10 times a day for 238 consecutive days (for details about this data set see Kossakowski et al. (2017)). The questions are "I feel relaxed", "I feel down", "I feel irritated", "I feel satisfied", "I feel lonely", "I feel anxious", "I feel enthusiastic", "I feel suspicious", "I feel cheerful", "I feel guilty", "I feel indecisive", and "I feel strong". Each question is answered on a 7-point Likert scale ranging from "not" to "very".
The data set is loaded with the mgm-package. We first subset the 12 mood variables:
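The exact loading code depends on how you obtain the data; the sketch below assumes the version that ships with mgm as symptom_data, with the mood items in the first 12 columns and the timing variables in symptom_data$data_time (check str(symptom_data) to verify this structure).

```r
library(mgm)

# symptom_data is the example data set included in mgm (Kossakowski et al., 2017);
# the mood items are assumed to be the first 12 columns
mood_data   <- as.matrix(symptom_data$data[, 1:12])
mood_labels <- symptom_data$colnames[1:12]
colnames(mood_data) <- mood_labels

# temporal information about each measurement (dayno, beepno, time_norm)
time_data <- symptom_data$data_time
```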
We see that the data set has 1476 observations:
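Checking the dimensions of the subsetted data (mood_data is the name used in the sketch above):

```r
dim(mood_data)
## [1] 1476   12
```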
time_data contains temporal information about each measurement. We will make use of the day on which the measurement occurred (dayno), the measurement prompt (beepno), and the overall time stamp (time_norm).
Selecting the optimal Bandwidth
One way of selecting a good bandwidth parameter is to fit time-varying models with different candidate bandwidth parameters on a training data set and evaluate their prediction error on a test data set. The function bwSelect() implements such a bandwidth selection scheme. Here we do not show the specification of this function, because it takes the same input arguments as tvmvar(), which we describe in a moment, plus a candidate sequence of bandwidth values and some specifications for how to split the data into training and test data. In addition, data-driven bandwidth selection can take a considerable amount of time to run, which would not allow you to run the code while reading the blog post. For this tutorial, we therefore simply fix the bandwidth to the value that was returned by bwSelect(). However, you can find the code to perform bandwidth selection with bwSelect() on the present data set here.
Estimating time-varying VAR Models
We can now specify the estimation of the time-varying VAR model. We provide the data as input and specify the type of variables and the number of categories they have with the type and level arguments. In our example data set all variables are continuous, so we set type = rep("g", 12) for continuous-Gaussian and set the number of categories to 1 by convention. We choose to select the regularization parameters with cross-validation via lambdaSel = "CV", and we specify that the VAR model should include a single lag with lags = 1. The arguments beepvar and dayvar provide, for each measurement, the notification number within a day and the day number, which is necessary to construct the VAR design matrix. In addition, we provide the time stamps of all measurements with timepoints = time_data$time_norm to account for missing measurements. Note, however, that we still assume a constant lag size of 1: the time stamps are only used to ensure that the weighting indeed gives the highest weight to the time points that are closest to the current estimation point (for details see Section 2.5 in this paper). So far, the specification is the same as for the mvar() function, which fits stationary mixed VAR models.
For the time-varying model, we need to specify two additional arguments. First, with estpoints = seq(0, 1, length = 20) we specify that we would like to estimate 20 local models spread across the duration of the entire time series (which is normalized to the interval [0,1]). The number of estimation points can be chosen arbitrarily large, but at some point adding more estimation points is not worth the additional computational cost, because neighboring local models become essentially identical. Finally, we specify the bandwidth with the bandwidth argument.
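Putting this specification together, a call to tvmvar() could look roughly as follows; the bandwidth value below is a placeholder for whatever bwSelect() returned on your training/test split, and mood_data and time_data are the object names from the loading sketch above.

```r
set.seed(1)

tvvar_obj <- tvmvar(data       = mood_data,
                    type       = rep("g", 12),           # all variables continuous-Gaussian
                    level      = rep(1, 12),              # number of categories (1 by convention)
                    lambdaSel  = "CV",                    # select regularization parameters via cross-validation
                    lags       = 1,                       # first-order VAR
                    beepvar    = time_data$beepno,        # notification number within a day
                    dayvar     = time_data$dayno,         # day number
                    timepoints = time_data$time_norm,     # time stamps, to account for missing measurements
                    estpoints  = seq(0, 1, length = 20),  # 20 local models across the normalized time series
                    bandwidth  = 0.34)                    # placeholder: use the value returned by bwSelect()
```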
We can print the output object in the console, which provides a summary of the model and also shows how many rows the VAR design matrix contains (876) compared to the number of time points in the data set (1476). The former number is lower because a VAR(1) model can only use a given time point if the measurement one lag earlier is also available; this is not the case for the first measurement on a given day or when there are missing responses during the day.
Computing Time-varying Prediction Errors
As with stationary VAR models, we can compute prediction errors. This can be done with the predict() function, which takes the model object, the data, and the two variables indicating the day number and the notification number. Providing the data and the notification variables separately from the model object makes it possible to compute prediction errors for new samples.
The argument errorCon = c("R2", "RMSE") specifies that the proportion of explained variance ($R^2$) and the root mean squared error (RMSE) should be returned as prediction errors. The final argument tvMethod specifies how time-varying prediction errors are calculated. The option tvMethod = "closestModel" makes predictions for a time point using the local model that is closest to it. The option chosen here, tvMethod = "weighted", provides a weighted average of the predictions of all local models, weighted with the weighting function centered on the location of the time point at hand. Typically, both methods give very similar results.
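A call matching this description could look roughly as follows; this is a sketch, and it assumes that predict() for mgm time-series objects accepts the beepvar/dayvar arguments described above.

```r
pred_obj <- predict(object   = tvvar_obj,
                    data     = mood_data,
                    beepvar  = time_data$beepno,   # notification number within a day
                    dayvar   = time_data$dayno,    # day number
                    errorCon = c("R2", "RMSE"),    # return R2 and RMSE as prediction errors
                    tvMethod = "weighted")         # weighted average over all local models
```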
The main output consists of the following two objects: pred_obj$tverrors is a list that contains the prediction errors for each estimation point / local model, and pred_obj$errors contains the errors averaged across estimation points.
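For example, one could trace the explained variance of a single variable across estimation points. The sketch below assumes that each entry of pred_obj$tverrors is a table with one row per variable and an R2 column, as described above; check str(pred_obj$tverrors[[1]]) to confirm the exact structure.

```r
# Time-varying R2 of the first variable ("Relaxed") across the 20 estimation points
R2_relaxed <- sapply(pred_obj$tverrors, function(err) err[1, "R2"])
round(as.numeric(R2_relaxed), 2)

# Errors averaged across estimation points
pred_obj$errors
```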
Visualizing parts of the Model
The time-varying VAR(1) model consists of $(p + p^2) \times E$ parameters, where $p$ is the number of variables and $E$ is the number of estimation points. Visualizing all parameters at once is therefore challenging. Instead, one can pick the parameters that are of most interest for the research question at hand. Here, we choose two different visualizations. First, we use the qgraph() function from the R-package qgraph to inspect the VAR interaction parameters at estimation points 1, 10, and 20:
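The plotting code could look roughly like this; it assumes that the fitted object stores the lagged effects in tvvar_obj$wadj (an array with dimensions p x p x lags x estimation points) and corresponding edge colors in tvvar_obj$edgecolor, which is how recent mgm versions organize the output. Check the package documentation if the structure differs.

```r
library(qgraph)

par(mfrow = c(1, 3))
for (tp in c(1, 10, 20)) {
  qgraph(t(tvvar_obj$wadj[, , 1, tp]),                  # lag-1 parameters at estimation point tp,
                                                        # transposed so arrows run predictor -> outcome
                                                        # (assuming wadj[i, j] is the effect of j on i)
         edge.color = t(tvvar_obj$edgecolor[, , 1, tp]),
         labels     = mood_labels,
         layout     = "circle",
         title      = paste0("Estimation point ", tp))
}
```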
We see that some parameters in the VAR models vary considerably over time. For example, the autocorrelation effect of Relaxed seems to decrease over time, the positive effect of Strong on Satisfied only appears at estimation point 20, and the negative effect of Satisfied on Guilty likewise only appears at estimation point 20.
We can zoom in on these individual parameters by plotting them as a function of time:
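Below is a sketch of such a plot. The variable indices (Relaxed = 1, Satisfied = 4, Guilty = 10, Strong = 12) follow the item order listed earlier, and the indexing wadj[outcome, predictor, lag, estimation point] is an assumption about the array layout; note that mgm stores parameters as absolute values, with their signs in tvvar_obj$signs.

```r
est_points <- 1:20

# Assumed array layout: wadj[outcome, predictor, lag, estimation point]
par_RelRel <- tvvar_obj$wadj[1, 1, 1, ]    # Relaxed   -> Relaxed
par_StrSat <- tvvar_obj$wadj[4, 12, 1, ]   # Strong    -> Satisfied
par_SatGui <- tvvar_obj$wadj[10, 4, 1, ]   # Satisfied -> Guilty (negative sign stored in $signs)

plot(est_points, par_RelRel, type = "l",
     xlab = "Estimation point", ylab = "Absolute parameter value",
     ylim = range(c(par_RelRel, par_StrSat, par_SatGui)))
lines(est_points, par_StrSat, col = "blue")
lines(est_points, par_SatGui, col = "red")
legend("topright",
       legend = c("Relaxed -> Relaxed", "Strong -> Satisfied", "Satisfied -> Guilty"),
       col = c("black", "blue", "red"), lty = 1, bty = "n")
```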
We see that the effect of Relaxed on itself at the next time point is relatively strong at the beginning of the time series, but then decreases towards zero and remains at zero from around estimation point 13. The cross-lagged effect of Strong on Satisfied at the next time point is equal to zero until around estimation point 9 and then seems to increase monotonically. Finally, the cross-lagged effect of Satisfied on Guilty is also equal to zero until around estimation point 13 and then decreases monotonically.
Stability of Estimates
As for stationary models, one can assess the stability of time-varying parameter estimates using bootstrapped sampling distributions. The mgm package allows one to do that with the resample() function. Here you can find code on how to do that for the example in this tutorial.
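A minimal sketch of such a call is shown below; the number of bootstrap samples nB is an illustrative choice, and resampling a time-varying model of this size can take a long time to run.

```r
res_obj <- resample(object = tvvar_obj,
                    data   = mood_data,
                    nB     = 50)   # number of bootstrap samples (illustrative choice)
```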
Time-varying or not?
Clearly, "time-varyingness" is a continuum that ranges from stationary to extremely time-varying. However, in some cases it might be necessary to decide whether the parameters of a VAR model are reliably time-varying. To reach such a decision one can use a hypothesis test with the null hypothesis that the model is not time-varying. Here is one way to perform such a test: one begins by fitting a stationary VAR model to the data and then repeatedly simulates data from this estimated model. For each of these simulated time series, one computes a pooled prediction error of the time-varying model. The distribution of these prediction errors serves as the sampling distribution under the null hypothesis. One then computes the pooled prediction error of the time-varying VAR model on the empirical data and uses it as the test statistic. This test is explained in more detail here, and the code to implement it for the data set used in this tutorial can be found here.
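The linked code implements the full procedure. As a rough illustration of the null-model simulation step only, here is a generic base-R sketch that simulates one Gaussian VAR(1) time series from a stationary model described by an intercept vector, a coefficient matrix, and residual standard deviations; the function name and the toy parameter values are hypothetical and not output of mgm.

```r
# Simulate one time series from a stationary Gaussian VAR(1):
#   y_t = intercepts + B %*% y_{t-1} + noise,  noise ~ N(0, diag(sds^2))
simulate_var1 <- function(n, intercepts, B, sds) {
  p <- length(intercepts)
  y <- matrix(NA, nrow = n, ncol = p)
  y[1, ] <- intercepts                           # arbitrary starting values
  for (t in 2:n) {
    y[t, ] <- intercepts + B %*% y[t - 1, ] + rnorm(p, mean = 0, sd = sds)
  }
  y
}

# Toy example with p = 2 variables (hypothetical parameter values)
B_toy <- matrix(c(0.3, 0.1,
                  0.0, 0.4), nrow = 2, byrow = TRUE)
sim_data <- simulate_var1(n = 500, intercepts = c(0, 0), B = B_toy, sds = c(1, 1))
```

To obtain the sampling distribution under the null hypothesis, one would repeat this simulation with the parameters estimated from the stationary fit, refit the time-varying model to each simulated series, and collect the pooled prediction errors.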
Summary
In this blog post, I have shown how to estimate a time-varying VAR model with a kernel smoothing approach, which is based on the assumption that all parameters are smooth functions of time. In addition to estimating the model, we discussed how to select an appropriate bandwidth parameter, how to compute (time-varying) prediction errors, and how to visualize different aspects of the model. Finally, I provided pointers to code that shows how to assess the stability of estimates via bootstrapping and how to perform a hypothesis test that can be used to decide between stationary and time-varying VAR models.
I would like to thank Fabian Dablander for his feedback on this blog post.