This post will deal with applying the constant-volatility procedure written about by Barroso and Santa Clara in their paper “Momentum Has Its Moments”.
The last two posts dealt with evaluating the intelligence of the signal-generation process. The strategy showed itself to be only marginally better than randomly throwing darts at a dartboard, and I was ready to reject it and move on to topics that are less of a toy example than a little rotation strategy. However, Brian Peterson told me to see this strategy through to the end, including testing out rule processes.
First off, to make a distinction, rules are not signals. Rules are essentially a way to quantify what exactly to do assuming one acts upon a signal. Things such as position sizing, stop-loss processes, and so on, all fall under rule processes.
This rule deals with using leverage in order to target a constant volatility.
So here’s the idea: in their paper, Pedro Barroso and Pedro Santa Clara took the Fama-French momentum data and found that the classic WML strategy certainly outperforms the market, but it has a critical downside, namely momentum crashes, in which being on the wrong side of a momentum trade needlessly exposes a portfolio to catastrophically large drawdowns. While the strategy in this series is long-only (and uses fixed-income ETFs, no less), and so would seem more robust against such massive drawdowns, there’s no reason to leave money on the table. To note, not only have Barroso and Santa Clara covered this phenomenon, but so have others, such as Tony Cooper in his paper “Alpha Generation and Risk Smoothing Using Volatility of Volatility”.
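Before getting to the full backtest, here is a minimal sketch of the scaling rule itself, assuming a hypothetical xts series of daily strategy returns called stratReturns, a 10% volatility target, and a 126-day lookback (the names and numbers are illustrative, not the paper’s exact implementation):

require(xts)
require(TTR)

# hypothetical daily strategy returns, standing in for a real backtest series
set.seed(42)
stratReturns <- xts(rnorm(504, 0, 0.01), order.by = Sys.Date() - 504:1)

volTarget <- 0.10                                        # desired annualized volatility
realizedVol <- runSD(stratReturns, n = 126) * sqrt(252)  # trailing annualized volatility estimate
leverage <- volTarget / realizedVol                      # lever up when quiet, de-lever when volatile
# lag the leverage so that today's position uses yesterday's volatility estimate
scaledReturns <- na.omit(lag.xts(leverage) * stratReturns)

The actual implementation below rebalances this leverage monthly through Return.portfolio rather than adjusting it every day.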
In any case, the setup here is simple: take the previous portfolios, consisting of 1-12 month momentum formation periods, and every month, compute the annualized standard deviation using a volatility lookback window ranging from 21 to 252 days in steps of 21, for a total of 12 x 12 = 144 trials. (So this will put the total trials run so far at 24 + 144 = 168…bonus points if you know where this tidbit is going to go.)
Here’s the code (again, following on from the last post, which follows from the second post, which follows from the first post in this series).
require(xts)
require(TTR)
require(PerformanceAnalytics)
require(reshape2)
require(ggplot2)

ruleBacktest <- function(returns, nMonths, dailyReturns, nSD = 126, volTarget = .1) {
  # momentum signal: sum of the trailing nMonths of monthly returns
  nMonthAverage <- apply(returns, 2, runSum, n = nMonths)
  nMonthAverage <- xts(nMonthAverage, order.by = index(returns))
  nMonthAvgRank <- t(apply(nMonthAverage, 1, rank))
  nMonthAvgRank <- xts(nMonthAvgRank, order.by = index(returns))
  selection <- (nMonthAvgRank == 5) * 1 # select highest average performance
  
  # daily returns of the signal-only strategy
  dailyBacktest <- Return.portfolio(R = dailyReturns, weights = selection)
  
  # leverage = target volatility / trailing annualized volatility, sampled at month ends
  constantVol <- volTarget / (runSD(dailyBacktest, n = nSD) * sqrt(252))
  monthlyLeverage <- na.omit(constantVol[endpoints(constantVol, on = "months")])
  
  # allocate the leverage to the strategy and the remainder to a zero-return cash column
  wts <- cbind(monthlyLeverage, 1 - monthlyLeverage)
  constantVolComponents <- cbind(dailyBacktest, 0)
  out <- Return.portfolio(R = constantVolComponents, weights = wts)
  out <- apply.monthly(out, Return.cumulative)
  return(out)
}

t1 <- Sys.time()
allPermutations <- list()
for(i in seq(21, 252, by = 21)) { # volatility lookback, in days
  monthVariants <- list()
  for(j in 1:12) { # momentum formation period, in months
    trial <- ruleBacktest(returns = monthRets, nMonths = j, dailyReturns = sample, nSD = i)
    sharpe <- table.AnnualizedReturns(trial)[3,]
    monthVariants[[j]] <- sharpe
  }
  allPermutations[[i]] <- do.call(c, monthVariants)
}
allPermutations <- do.call(rbind, allPermutations)
t2 <- Sys.time()
print(t2 - t1)

rownames(allPermutations) <- seq(21, 252, by = 21)
colnames(allPermutations) <- 1:12

# Sharpe ratios of the signal-only (baseline) portfolios, one per momentum formation period
baselineSharpes <- table.AnnualizedReturns(algoPortfolios)[3,]
baselineSharpeMat <- matrix(rep(baselineSharpes, 12), ncol = 12, byrow = TRUE)
diffs <- allPermutations - as.numeric(baselineSharpeMat)

# heatmap of Sharpe ratio improvements over the baseline
meltedDiffs <- melt(diffs)
colnames(meltedDiffs) <- c("volFormation", "momentumFormation", "sharpeDifference")
ggplot(meltedDiffs, aes(x = momentumFormation, y = volFormation, fill = sharpeDifference)) +
  geom_tile() + scale_fill_gradient2(high = "green", mid = "yellow", low = "red")

# heatmap of the absolute Sharpe ratios
meltedSharpes <- melt(allPermutations)
colnames(meltedSharpes) <- c("volFormation", "momentumFormation", "Sharpe")
ggplot(meltedSharpes, aes(x = momentumFormation, y = volFormation, fill = Sharpe)) +
  geom_tile() + scale_fill_gradient2(high = "green", mid = "yellow", low = "red",
                                     midpoint = mean(allPermutations))
Again, there’s no parallel code, since this is a relatively small example and I don’t know which OS any given instance of R runs on (Windows and Linux have different parallelization infrastructure).
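That said, for anyone who does want to parallelize the outer loop, the base parallel package’s PSOCK clusters work on both Windows and Linux. Here is a rough sketch, under the assumption that the ruleBacktest function and the monthRets and sample objects from above are already in the workspace:

require(parallel)
require(PerformanceAnalytics)

cl <- makeCluster(detectCores() - 1) # PSOCK cluster: cross-platform
clusterEvalQ(cl, { require(xts); require(TTR); require(PerformanceAnalytics) })
clusterExport(cl, c("ruleBacktest", "monthRets", "sample"))

volWindows <- seq(21, 252, by = 21)
parallelRows <- parLapply(cl, volWindows, function(i) {
  sapply(1:12, function(j) {
    trial <- ruleBacktest(returns = monthRets, nMonths = j, dailyReturns = sample, nSD = i)
    as.numeric(table.AnnualizedReturns(trial)[3,]) # annualized Sharpe ratio
  })
})
stopCluster(cl)

allPermutations <- do.call(rbind, parallelRows)
rownames(allPermutations) <- volWindows
colnames(allPermutations) <- 1:12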
So the idea here is simply to compare the Sharpe ratios across the different volatility lookback periods against the baseline, signal-process-only portfolios. The reason I use Sharpe ratios, and not, say, CAGR, volatility, or drawdown, is that Sharpe ratios are scale-invariant. In this case, I’m targeting an annualized volatility of 10%, but with a higher targeted volatility, one can obtain higher returns at the cost of higher drawdowns, or be more conservative with a lower one. The Sharpe ratio, however, should stay relatively consistent within reasonable bounds.
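To see why, note that multiplying a return stream by a constant leverage factor scales its mean and its standard deviation equally, so (ignoring the risk-free rate and financing costs) the ratio is unchanged. A quick illustration with made-up numbers:

set.seed(123)
rets <- rnorm(252, mean = 0.0005, sd = 0.01) # hypothetical daily returns

sharpe <- function(x) mean(x) / sd(x) * sqrt(252) # annualized, zero risk-free rate

sharpe(rets)        # baseline Sharpe ratio
sharpe(2 * rets)    # 2x leverage: same Sharpe, double the volatility and drawdowns
sharpe(0.5 * rets)  # half leverage: again the same Sharpe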
So here are the results:
Sharpe improvements:
In this case, the diagram shows that, on the whole, once the volatility estimation period becomes long enough, the results are generally positive. With a very short estimation period, the volatility estimate depends more on the last month’s choice of instrument than on the longer-term volatility of the system itself, which can create poor forecasts. Also of note is that the one-month momentum formation period doesn’t seem very amenable to the constant volatility targeting scheme (there’s basically little improvement, if not a slight drag, on risk-adjusted performance). This is interesting in that the baseline Sharpe ratio for the one-month formation is among the best of the baseline performances. However, on the whole, volatility targeting does improve the risk-adjusted performance of the system, even one as simple as throwing all your money into one asset every month based on a single momentum signal.
Absolute Sharpe ratios:
In this case, the absolute Sharpe ratios look fairly solid for such a simple system. The 3-, 7-, and 9-month variants are slightly lower, but once the volatility estimation period reaches between 126 and 252 days, the results are fairly robust. The Barroso and Santa Clara paper uses a period of 126 days to estimate annualized volatility, which looks solid across the entire spectrum of momentum formation periods.
In any case, it seems the verdict is that a constant volatility target improves results.
Thanks for reading.
NOTE: while I am currently consulting, I am always open to networking, meeting up (Philadelphia and New York City both work), consulting arrangements, and job discussions. Contact me through my email at ilya.kipnis@gmail.com, or through my LinkedIn, found here.