In the February 2015 edition of The Journal of Finance, a well-known academic paper, “The Pre-FOMC Announcement Drift”, was finally published, almost four years after the working paper was released into the public domain in 2011.
Authored by Lucca and Moench, researchers at the US Federal Reserve, it documents the tendency of the S&P500 Index to rise in the 24 hours prior to scheduled FOMC announcements. Perhaps somewhat surprisingly, no such pattern is observed in interest rate markets such as Eurodollar rates or 10-year Treasuries. The paper can be downloaded here: http://onlinelibrary.wiley.com/doi/10.1111/jofi.12196/abstract.
Since it’s been public information for some time now, I thought it would be interesting to check whether the drift continues. In this post, we’ll first try to reproduce the essential result of their work (in-sample) and then look at the post-public domain results (out-of-sample).
Note that we won't try to replicate their research exactly, but we will use some other interesting quant techniques along the way.
Data
Lucca and Moench used intra-day S&P500 data (2pm on the day prior to the announcement to 2pm on the day of the announcement) to…
…show that since 1994, the S&P500 Index has on average increased 49 basis points in the 24-hours before scheduled FOMC announcements.
As we don’t have ready access to such data, we’ll use daily close data for the S&P500 ETF (SPY) as a proxy. You could think of this as also checking the robustness of their result, by shifting the return calculation from 2pm to 4pm.
In addition, their post-1994 sample ran from September 1994 to March 2011, but since we have both the SPY price data and the FOMC dates (from my previous post here: http://www.returnandrisk.com/2015/01/fomc-dates-full-history-web-scrape.html), we'll define in-sample as the period from January 1994 to March 2011, and out-of-sample as April 2011 to January 2015.
Also, since we can easily get risk-free rate data from the Kenneth French Data Library http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html via Quandl, we'll conduct our study using excess returns, i.e. the simple SPY percent return less the risk-free rate of return.
Here is the R code to setup and pre-process the required data:
################################################################################
# install packages and load them                                               #
################################################################################
install.packages(c("quantmod", "Quandl", "boot", "eventstudies", "compute.es",
                   "pwr", "stargazer"), repos = "http://cran.us.r-project.org")
library(quantmod)
library(Quandl)
library(boot)
library(eventstudies)
library(stargazer)
library(compute.es)
library(pwr)
################################################################################
# get data                                                                     #
################################################################################
# download csv file of FOMC announcement dates from previous post into working
# directory, and read into R
download.file(
    "https://docs.google.com/uc?export=download&id=0B4oNodML7SgSMlZxYW4yWTZabGs",
    "fomcdatesall.csv", "curl")
fomcdatesall <- read.csv("fomcdatesall.csv",
                         colClasses = c(rep("Date", 2), rep("numeric", 2),
                                        rep("character", 2)),
                         stringsAsFactors = FALSE)
# get S&P500 ETF prices
getSymbols("SPY", from = "1994-01-01", to = "2015-02-28")
# get fama french factors from quandl
FF <- Quandl("KFRENCH/FACTORS_D", type = "xts")
# get daily risk-free rate of return i.e. "The Tbill return is the simple daily
# rate that, over the number of trading days in the month, compounds to 1-month
# TBill rate from Ibbotson and Associates, Inc."
rets.rf <- FF["1994-01-01/2015-02-28", "RF"] / 100
# calculate SPY 1-day simple returns (close-to-close)
rets <- cbind(ROC(SPY[, "SPY.Adjusted"], type = "discrete")[-1], rets.rf[-1])
rets$RF <- na.locf(rets$RF) # no Feb 2015 data at time of writing
# calculate SPY excess return over risk-free tbill return
xs.rets <- rets[, 1] - rets[, 2]
names(xs.rets) <- "xs.ret"
################################################################################
# create in-sample data - Jan 1994 to Mar 2011                                 #
################################################################################
ins.dates <- subset(fomcdatesall, begdate > "1994-01-01" &
                    begdate < as.Date("2011-03-31") & scheduled == 1,
                    select = c(begdate, enddate))
# get in-sample SPY 1-day excess simple returns (close-to-close)
ins.xs.rets <- xs.rets["/2011-03-31"]
# get returns on fomc dates
ins.xs.fomcrets <- ins.xs.rets[ins.dates$enddate, ]
# get returns on nonfomc dates
ins.xs.nonfomcrets <- ins.xs.rets[-which(index(ins.xs.rets) %in%
                                         ins.dates$enddate), ]
################################################################################
# create out-of-sample data - Apr 2011 to Jan 2015                             #
################################################################################
oos.dates <- subset(fomcdatesall, begdate > as.Date("2011-03-31") &
                    begdate < as.Date("2015-02-28") & scheduled == 1,
                    select = c(begdate, enddate))
# get out-of-sample SPY 1-day excess simple returns (close-to-close)
oos.xs.rets <- xs.rets["2011-04-01/2015-02-28"]
# get returns on fomc dates
oos.xs.fomcrets <- oos.xs.rets[oos.dates$enddate, ]
# get returns on nonfomc dates
oos.xs.nonfomcrets <- oos.xs.rets[-which(index(oos.xs.rets) %in%
                                         oos.dates$enddate), ]
In-sample Analysis: Jan 1994 – Mar 2011
Lucca and Moench conducted what is known as an event study, an empirical research method that measures the impact of an event (e.g. FOMC announcements) on prices or returns (e.g. the S&P500 Index). So let's do something similar…
Event Study
There’s an R package, eventstudies, which has some handy functions to do event studies, so we’ll use it and then create a chart to visualize the findings for the 138 scheduled FOMC meetings from January 1994 to March 2011. Below is the R code snippet for the event study:
################################################################################
# in-sample event study                                                        #
################################################################################
# event study function
plot.es <- function(dates, returns, window = 5) {
    # prepare data for event study
    ins.events <- data.frame(unit = names(returns), when = dates[, "enddate"],
                             stringsAsFactors = FALSE)
    # map returns to event time e.g. event time index = 0 is mapped to calendar
    # end date of fomc meeting
    rets.evt <- phys2eventtime(z = returns, events = ins.events, width = window)
    # get 5-day window of returns either side of end date of fomc meeting
    rets.window <- window(x = rets.evt$z.e, start = -window, end = window)
    # calculate cumulative return over entire window
    rets.cum <- remap.cumprod(rets.window, is.pc = FALSE, is.returns = TRUE,
                              base = 1)
    mean.rets.cum <- (rowMeans(rets.cum, na.rm = TRUE) - 1) * 100
    # plot event study chart
    plot(-window:window, mean.rets.cum, type = "l", lwd = 1,
         xlab = "Days Relative to Announcement Date",
         ylab = "Cumulative Excess Returns (%)",
         main = paste("Cumulative SPY Excess Returns Around FOMC Announcements"),
         xaxt = "n")
    axis(1, at = seq(-window, window, by = 1))
    points(-window:window, mean.rets.cum)
    text(-window:window, mean.rets.cum, round(mean.rets.cum, 2),
         adj = c(-0.25, 1), cex = 0.7)
    abline(v = 0, h = 0)
    abline(v = -1, lty = 2, col = "blue")
}
plot.es(ins.dates, ins.xs.rets, 5)
The event study chart below clearly shows the pop in the SPY returns in the 1-day period before the FOMC announcement, which in our setup is about +0.33% (0.52% – 0.19%). This is the key finding of the paper. In addition, there is a noticeable drift higher on T-1 of +0.20% (0.19% – -0.01%).
Summary Statistics: FOMC and Non-FOMC Dates
Here are the in-sample summary statistics and a plot of the FOMC excess returns.
## Summary Statistics for In-sample FOMC Dates
## ============================================
## Statistic   N    Mean  St. Dev.   Min     Max
## --------------------------------------------
## xs.ret     138  0.0033  0.0117  -0.0276  0.0471
## --------------------------------------------
##
## Summary Statistics for In-sample Non-FOMC Dates
## ==============================================
## Statistic    N     Mean  St. Dev.   Min     Max
## ----------------------------------------------
## xs.ret     4,205  0.0002  0.0127  -0.0985  0.1452
## ----------------------------------------------
Statistical Significance Test
We’ll check first whether the FOMC returns are normally distributed using the Shapiro-Wilk normality test and looking at a Q-Q plot.
## Shapiro-Wilk normality test
##
## data:  coredata(ins.xs.fomcrets)
## W = 0.967, p-value = 0.002021
As suspected, they're not (note the fat tails), so we'll test for statistical significance using a bootstrap confidence interval (CI) for the mean return and check whether it contains zero.
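The code for these two checks isn't shown above (the full script is on GitHub), but a minimal sketch looks like the following. It uses deterministic fat-tailed stand-in returns in place of coredata(ins.xs.fomcrets) so it runs on its own; note that for boot.ci to produce the studentized interval, the boot statistic must return its own variance estimate alongside the mean.

```r
library(boot)  # recommended package, ships with R

# deterministic fat-tailed stand-in for the 138 in-sample FOMC-day excess
# returns (coredata(ins.xs.fomcrets) in the setup code above)
fomc.rets <- 0.0033 + qt(ppoints(138), df = 3) * 0.008

# Shapiro-Wilk test: a small p-value rejects the null of normality
shapiro.test(fomc.rets)
# Q-Q plot: fat tails bend away from the reference line
qqnorm(fomc.rets)
qqline(fomc.rets)

# bootstrap the mean; return the mean and its variance estimate so that
# boot.ci can also compute the studentized interval
mean.var <- function(d, i) {
    x <- d[i]
    c(mean(x), var(x) / length(x))
}
set.seed(1)
boot.mean <- boot(fomc.rets, mean.var, R = 999)
boot.ci(boot.mean, type = "all")
# if none of the intervals contain zero, the mean FOMC-day excess return
# is statistically different from zero at the 5% level
```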
Based on the various CI’s calculated by the boot package, none of which contain zero, the FOMC returns are statistically significant. For example, the studentized CI is 0.15% to 0.53%.
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 999 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = boot.mean, type = c("all"))
##
## Intervals :
## Level      Normal              Basic             Studentized
## 95%   ( 0.0014,  0.0051 )  ( 0.0014,  0.0051 )  ( 0.0015,  0.0053 )
##
## Level     Percentile            BCa
## 95%   ( 0.0014,  0.0051 )  ( 0.0013,  0.0051 )
## Calculations and Intervals on Original Scale
Difference in Means Test by Bootstrap
In addition, I thought it would be interesting to test whether there is a statistical difference between returns on FOMC dates and non-FOMC dates. We’ll again use bootstrap resampling to do such a test. Here is the R code for this bootstrap and the R output.
################################################################################
# in-sample difference in means test by bootstrap                              #
################################################################################
# difference in means function for bootstrap
diff.means <- function(d, f) {
    n <- nrow(d)
    idx1 <- 1:table(as.numeric(d$fomc))[2]
    idx2 <- seq(length(idx1) + 1, n)
    m1 <- sum(d[idx1, 1] * f[idx1]) / sum(f[idx1])
    m2 <- sum(d[idx2, 1] * f[idx2]) / sum(f[idx2])
    ss1 <- sum(d[idx1, 1]^2 * f[idx1]) - (m1^2 * sum(f[idx1]))
    ss2 <- sum(d[idx2, 1]^2 * f[idx2]) - (m2^2 * sum(f[idx2]))
    c(m1 - m2, (ss1 + ss2) / (sum(f) - 2))
}
# create stratified in-sample data for bootstrap
ins.xs.rets.boot <- data.frame(rets = c(coredata(ins.xs.fomcrets),
                                        coredata(ins.xs.nonfomcrets)),
                               fomc = c(rep(1, nrow(ins.xs.fomcrets)),
                                        rep(0, nrow(ins.xs.nonfomcrets))))
# perform bootstrap
set.seed(1)
(boot.diffmean <- boot(ins.xs.rets.boot, diff.means, R = 999, stype = "f",
                       strata = ins.xs.rets.boot[, 2]))
# get 95% confidence interval using the studentized method
boot.ci(boot.diffmean, type = c("stud"))
# compare with Student's t-Test (for reference)
t.test(ins.xs.fomcrets, ins.xs.nonfomcrets)

## STRATIFIED BOOTSTRAP
##
##
## Call:
## boot(data = ins.xs.rets.boot, statistic = diff.means, R = 999,
##     stype = "f", strata = ins.xs.rets.boot[, 2])
##
##
## Bootstrap Statistics :
##         original        bias    std. error
## t1* 0.0030971019  3.046648e-05 1.022256e-03
## t2* 0.0001602753  1.229334e-07 8.573936e-06

## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 999 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = boot.diffmean, type = c("stud"))
##
## Intervals :
## Level    Studentized
## 95%   ( 0.0011,  0.0050 )
## Calculations and Intervals on Original Scale
The difference in means is +0.31% and the 95% confidence interval of +0.11% to +0.50% does not contain zero, so we can conclude that there is a statistically significant difference between SPY returns on FOMC and non-FOMC dates, à la Lucca and Moench. All good so far…
Out-of-sample Analysis: April 2011 to January 2015
Let’s now do out-of-sample testing to see whether the pattern has continued to hold up after the working paper was in the public domain.
Event Study
There are 31 scheduled FOMC meetings from April 2011 to January 2015.
The out-of-sample chart below still shows a positive drift in SPY returns in the 1-day period before the FOMC announcement, but it is less pronounced at about +0.25% (0.32% – 0.07%). Also, the previously positive return on day T-1 is now -0.16% (0.07% – 0.23%).
Summary Statistics: FOMC and Non-FOMC Dates
Here are the out-of-sample summary statistics and a plot of the FOMC excess returns.
## Summary Statistics for Out-of-sample FOMC Dates
## ===========================================
## Statistic   N    Mean  St. Dev.   Min     Max
## -------------------------------------------
## xs.ret      31  0.0026  0.0137  -0.0295  0.0466
## -------------------------------------------
##
## Summary Statistics for Out-of-sample Non-FOMC Dates
## ============================================
## Statistic   N    Mean  St. Dev.   Min     Max
## --------------------------------------------
## xs.ret     952  0.0005  0.0096  -0.0651  0.0449
## --------------------------------------------
Statistical Significance Test
For the 31 out-of-sample FOMC returns, the bootstrapped studentized 95% CI is -0.21% to 0.82% (i.e. contains zero), so we now can’t reject the null hypothesis that these returns are equal to zero.
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 999 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = boot.mean1, type = c("all"))
##
## Intervals :
## Level      Normal              Basic             Studentized
## 95%   (-0.0021,  0.0074 )  (-0.0023,  0.0071 )  (-0.0021,  0.0082 )
##
## Level     Percentile            BCa
## 95%   (-0.0018,  0.0076 )  (-0.0015,  0.0083 )
## Calculations and Intervals on Original Scale
Difference in Means Test by Bootstrap
Similarly, the bootstrapped difference-in-means 95% CI of -0.25% to +0.69% indicates that we also cannot reject, at the 5% level, the null hypothesis of no difference between SPY excess returns on FOMC and non-FOMC dates.
## STRATIFIED BOOTSTRAP
##
##
## Call:
## boot(data = oos.xs.rets.boot, statistic = diff.means, R = 999,
##     stype = "f", strata = oos.xs.rets.boot[, 2])
##
##
## Bootstrap Statistics :
##         original        bias    std. error
## t1* 2.117564e-03 -1.125171e-04 2.416125e-03
## t2* 9.424895e-05  1.024752e-07 8.292720e-06

## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 999 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = boot.diffmean1, type = c("stud"))
##
## Intervals :
## Level    Studentized
## 95%   (-0.0025,  0.0069 )
## Calculations and Intervals on Original Scale
Required Out-of-sample Size
The results above could simply be due to the small sample size. So how large an out-of-sample would we need for statistical significance, if the drift does indeed persist? Thinking out of my finance box and into the clinical research box, we can get a rough estimate if we're prepared to make some assumptions (e.g. normality). This is a common problem in clinical trial design, where results from previous studies (e.g. effect sizes) are used to derive an answer.
In our case, by using the in-sample FOMC and non-FOMC returns, conducting a standard Welch two-sample t-test to get the effect size (i.e. the strength of the difference between the FOMC and non-FOMC returns), and with some trial and error, we get a ballpark estimate of 120 FOMC meetings, or 15 years (120 / 8 meetings per year)!
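As a rough cross-check of that ballpark figure, here is a sketch using base R's normal approximation rather than the pwr package used in the full script: compute Cohen's d from the in-sample summary statistics above, then, treating the non-FOMC sample as effectively unlimited, solve for the number of FOMC meetings giving 80% power in a two-sided 5% test. (A sketch under the normality assumption, not the post's exact calculation.)

```r
# in-sample summary statistics from the tables above
m1 <- 0.0033; s1 <- 0.0117; n1 <- 138    # FOMC days
m2 <- 0.0002; s2 <- 0.0127; n2 <- 4205   # non-FOMC days

# Cohen's d: difference in means scaled by the pooled standard deviation
sp <- sqrt(((n1 - 1) * s1^2 + (n2 - 1) * s2^2) / (n1 + n2 - 2))
d  <- (m1 - m2) / sp

# with the non-FOMC group effectively unlimited, the number of FOMC
# meetings needed for 80% power at the 5% level (normal approximation)
n.fomc <- ceiling(((qnorm(0.975) + qnorm(0.80)) / d)^2)
n.fomc  # roughly 130 meetings, in the same ballpark as the estimate above
```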
Conclusion
For the in-sample period with 138 FOMC meetings, we were able to derive a similar result of an average pre-FOMC announcement drift of +0.33% using close-to-close data (compared to Lucca and Moench’s +0.49% using 2pm-2pm data). Furthermore, there was a statistically significant difference in the mean returns on FOMC and non-FOMC days.
Out-of-sample, the picture is less clear. Based on the much smaller sample size of 31 meetings, there is still a positive drift (+0.25%) using close-to-close returns. However, we can’t conclude now that this or the difference in mean returns on FOMC and non-FOMC days is statistically significant at the 5% level.
A few possible reasons for the second finding:
- small out-of-sample size, and so only (a very long) time will tell if the drift persists…
- if the pattern has indeed weakened after publication of the working paper, a possible explanation is that the market is semi-strong form efficient (defined at Wikipedia, http://en.wikipedia.org/wiki/Efficient-market_hypothesis#Semi-strong-form_efficiency, as “…prices adjust to publicly available new information very rapidly and in an unbiased fashion, such that no excess returns can be earned by trading on that information.”)
- the pattern exists, but is time-varying…
- the FOMC has become more transparent since Lucca and Moench discovered the pattern, and hence there is less uncertainty (risk) around monetary policy decisions, leading to a lower risk premium for such events. See, for example, http://www.frbsf.org/education/publications/doctor-econ/2012/august/transparency-lessons-financial-crisis:
- “April 27, 2011: Added Chairman’s press conference to the release of projections.”
- “January 25, 2012: Provided a statement of longer-run goals and monetary policy strategy including a 2% inflation target. Fed policymakers began reporting their short-term interest rate projections over the next few years.”
So if you're of a conservative inclination, you'll proceed with caution. On the other hand, if you're game like Richard Branson, you'll say with glee, “Screw it, let's trade it!”
Click here for the complete R code on GitHub.