
Monetary Policy & Credit Easing pt. 8: Econometrics Tests in R

[This article was first published on The Dancing Economist, and kindly contributed to R-bloggers.]
Hello folks, it's time to cover some important econometrics tests you can do in R.

The Akaike information criterion (AIC) is a measure of the relative goodness of fit of a statistical model. If you have 10 models and order them by AIC, the one with the smallest AIC is your best model, ceteris paribus.
The following code computes the AIC and a similar criterion called the BIC:



> AIC(srp1.gls)
[1] 100.7905


> BIC(srp1.gls)
[1] 140.7421
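If you have several candidate models to line up, AIC() also accepts more than one fitted object at a time and returns a small table; a quick sketch with hypothetical alternative fits srp2.gls and srp3.gls:

AIC(srp1.gls, srp2.gls, srp3.gls)   # one row per model; the smallest AIC wins, ceteris paribus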

Say we wish to see whether our model has an error term that follows a roughly normal distribution. For this we can perform the Jarque-Bera test, which checks kurtosis as well as skewness. The JarqueBeraTest() function requires that you load the FitAR package.


> JarqueBeraTest(srp1.gls$res[-(1)])
$LM
[1] 19.2033


$pvalue
[1] 6.761719e-05
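The tiny p-value above means we reject the null hypothesis that the errors are normally distributed. As an aside, the tseries package (which you will need for the Dickey-Fuller test further down anyway) offers an equivalent test; a quick sketch on the same residuals:

library(tseries)
jarque.bera.test(srp1.gls$res[-(1)])   # same joint test of skewness and kurtosis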

To check whether the mean of the residuals is (close to) 0, and to see their standard deviation, the following code works:


> mean(srp1.gls$res[-(1)])
[1] 0.003354243
> sd(srp1.gls$res[-(1)])
[1] 0.3666269
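If you want a formal check that the residual mean is indistinguishable from zero, rather than just eyeballing it, a one-sample t-test is one option; a quick sketch on the same residuals:

t.test(srp1.gls$res[-(1)], mu = 0)   # null hypothesis: the residuals have mean 0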

Other tests, like the Breusch-Pagan and Goldfeld-Quandt, give us a hint as to whether our residual variance is stable or not, i.e. whether heteroskedasticity is present. In order for these to work you have to load the lmtest package. Also, you can only run them on lm objects, that is, on your Ordinary Least Squares regressions; for any Generalized Least Squares regression you'll have to perform these tests manually (see the sketch after the output below), and if you know of an easier or softer way, please share.


> bptest(srp1.lm)


studentized Breusch-Pagan test


data:  srp1.lm 
BP = 48.495, df = 12, p-value = 2.563e-06


> gqtest(srp1.lm)


Goldfeld-Quandt test


data:  srp1.lm 
GQ = 0.1998, df1 = 40, df2 = 40, p-value = 1
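For the GLS case mentioned above, here is a minimal sketch of a manual Breusch-Pagan-style check, assuming a gls fit srp1.gls with hypothetical regressors x1 and x2 in a data frame dat: regress the squared residuals on the regressors and compare n times the R-squared of that auxiliary regression to a chi-squared distribution.

e2  <- resid(srp1.gls)^2                                      # squared residuals from the gls fit
aux <- lm(e2 ~ x1 + x2, data = dat)                           # auxiliary OLS regression on the regressors
LM  <- length(e2) * summary(aux)$r.squared                    # LM statistic: n * R-squared
pchisq(LM, df = length(coef(aux)) - 1, lower.tail = FALSE)    # p-value; small values suggest heteroskedasticity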


You can also use the Durbin-Watson test (also from the lmtest package) to check for first-order autocorrelation:


> dwtest(srp1.lm)


Durbin-Watson test


data:  srp1.lm 
DW = 1.4862, p-value = 0.0001955
alternative hypothesis: true autocorrelation is greater than 0 
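Like bptest() and gqtest(), dwtest() expects an lm object, but for a gls fit the Durbin-Watson statistic itself is easy to compute by hand; a minimal sketch on the gls residuals (values near 2 suggest no first-order autocorrelation):

e  <- resid(srp1.gls)            # residuals from the gls fit
DW <- sum(diff(e)^2) / sum(e^2)  # Durbin-Watson statistic
DW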

Wish to get confidence intervals for your parameter estimates? Then use the confint() function as shown below for the Generalized Least Squares regression on long-term risk premia from 2001-2011.


> confint(p2lrp.gls)
                        2.5 %        97.5 %
yc              -0.1455727340  0.1498852728
default          0.2994818014  1.0640354237
Volatility       0.0336077958  0.0617798767
CorporateProfit -0.0010916473  0.0006628209
FF              -0.1788624533  0.0931406285
ER               0.0001539035  0.0016060804
Fedmbs          -0.0061554994  0.0085638593
Support         -0.1499342096  0.1615652273
FedComm         -0.0108567077  0.0750407328
FedGdp          -0.1347070955  0.2528217710
ForeignDebt     -0.0441198164  0.1042805549
govcredit        0.1090847204  0.6796839003
FedBalance      -2.0940925835  0.0370114069
UGAP            -0.4821566147  0.3188891550
OGAP            -0.2239749029  0.1073611677
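As an aside, confint() also takes a level argument if you want something other than 95% intervals, and if the model was fit with nlme's gls() (as assumed here) the package's own intervals() function gives a similar summary:

confint(p2lrp.gls, level = 0.90)   # 90% confidence intervals
intervals(p2lrp.gls)               # nlme's interval summary for a gls fit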

Another nice feature is finding the log-likelihood of your estimation:

> logLik(lrp2.lm)
‘log Lik.’ 23.05106 (df=17)
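AIC is just -2 times the log-likelihood plus 2 times the number of estimated parameters, so the value reported by AIC() can be reproduced from logLik() directly; a quick sketch using the fit above:

ll <- logLik(lrp2.lm)                      # log-likelihood with its "df" attribute
-2 * as.numeric(ll) + 2 * attr(ll, "df")   # should agree with AIC(lrp2.lm)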

Want to see if you have a unit root in your residual values? Then perform the augmented Dickey-Fuller test. For this you'll have to load the tseries package.

> adf.test(lrp2.gls$res[-(1:4)])

        Augmented Dickey-Fuller Test

data:  lrp2.gls$res[-(1:4)] 
Dickey-Fuller = -7.4503, Lag order = 3, p-value = 0.01
alternative hypothesis: stationary 

Warning message:
In adf.test(lrp2.gls$res[-(1:4)]) : p-value smaller than printed p-value
> adf.test(lrp2.lm$res)
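The lag order reported above is chosen automatically by adf.test(); if you want to set it yourself, the function also takes a k argument, for example (using the same residual series):

adf.test(lrp2.gls$res[-(1:4)], alternative = "stationary", k = 3)   # fix the lag order at 3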

I hope this mini-series has been informative to all who tuned in. For more info on anything you see here, please don't be shy to comment, and keep dancin',

Steven J.
