The Quarterly Tactical Strategy (aka QTS)
This post introduces the Quarterly Tactical Strategy (QTS), created by Cliff Smith in a Seeking Alpha article. It presents a variation on the typical dual-momentum strategy that trades only once a quarter, yet delivers a seemingly solid risk/return profile. However, the article leaves off a protracted period of unimpressive performance at the turn of the millennium.
First off, due to the imprecision of the English language, I received some help from TrendXplorer in implementing this strategy. Those who are fans of Amibroker are highly encouraged to visit his blog.
In any case, this strategy is fairly simple:
Take a group of securities (in this case, 8 mutual funds), and do the following:
Rank each security on long momentum (105 days) and short momentum (20 days), and invest in the security with the highest composite rank, with ties broken by the long momentum (that is, .501*longRank + .499*shortRank, for instance). If the price of the security with the highest composite rank is greater than its three-month SMA, invest in that security; otherwise, hold cash.
There are two critical points that must be made here:
1) The three-month SMA is *not* a 63-day SMA. It is a three-point SMA computed on that security's monthly endpoint prices.
2) Unlike in flexible asset allocation or elastic asset allocation, the cash asset is not treated as a formal asset.
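To make point 1 concrete, here's a toy illustration of the difference, using a synthetic daily price series (the dates and prices are made up for demonstration):

```r
library(xts)
library(TTR)

# synthetic daily price series (toy data, not the actual mutual funds)
set.seed(42)
dates  <- seq(as.Date("2010-01-01"), as.Date("2010-12-31"), by = "day")
prices <- xts(cumprod(1 + rnorm(length(dates), 0, 0.01)) * 100, order.by = dates)

# a 63-day SMA averages the last 63 daily closes
dailySMA63 <- SMA(prices, n = 63)

# the three-month SMA averages only the last three monthly endpoint closes
monthlyPrices <- prices[endpoints(prices, on = "months"), ]
monthlySMA3   <- SMA(monthlyPrices, n = 3)

# the two are generally not equal on the same date
tail(dailySMA63, 1)
tail(monthlySMA3, 1)
```

The three-point monthly SMA reacts to only three observations, so it behaves quite differently from a 63-day daily SMA even though both nominally cover about a quarter of trading days.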
Let’s look at the code. The data consists of adjusted mutual fund prices. (With quarterly turnover, the funds' frequent-trading restrictions are satisfied, though I'm not sure how dividends are treated; that is, whether a retail investor would actually realize these returns, less a hopefully tiny transaction cost through their broker of not much more than $1 per transaction.)
```r
require(quantmod)
require(PerformanceAnalytics)
require(TTR)

#get our data from yahoo, use adjusted prices
symbols <- c("NAESX", #small cap
             "PREMX", #emerging bond
             "VEIEX", #emerging markets
             "VFICX", #intermediate investment grade
             "VFIIX", #GNMA mortgage
             "VFINX", #S&P 500 index
             "VGSIX", #MSCI REIT
             "VGTSX", #total intl stock idx
             "VUSTX") #long term treasury (cash)

getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-Za-z]*", "", colnames(prices))

#define our cash asset and keep track of which column it is
cashAsset <- "VUSTX"
cashCol <- grep(cashAsset, colnames(prices))

#start our data off on the security with the least data (VGSIX in this case)
prices <- prices[!is.na(prices[,7]),]

#cash is not a formal asset in our ranking
cashPrices <- prices[, cashCol]
prices <- prices[, -cashCol]
```
Nothing anybody hasn’t seen before up to this point: get data, start it off at the most recently incepted mutual fund, separate out the cash prices, and move along.
What follows is a rather rough implementation of QTS, not wrapped up in any sort of function that others can plug and play with (though I hope I made the code readable enough for others to tinker with).
Let’s define parameters and compute momentum.
```r
#define our parameters
nShort <- 20
nLong <- 105
nMonthSMA <- 3

#compute momentums
rocShort <- prices/lag(prices, nShort) - 1
rocLong <- prices/lag(prices, nLong) - 1
```
Now comes some endpoints functionality (or, more colloquially, magic) that the xts library provides. It’s what allows people to get work done in R much faster than in other programming languages.
```r
#take the endpoints of quarter start/end
quarterlyEps <- endpoints(prices, on="quarters")
monthlyEps <- endpoints(prices, on="months")

#take the prices at quarterly endpoints
quarterlyPrices <- prices[quarterlyEps,]

#short momentum at quarterly endpoints (20 day)
rocShortQtrs <- rocShort[quarterlyEps,]

#long momentum at quarterly endpoints (105 day)
rocLongQtrs <- rocLong[quarterlyEps,]
```
In short, get the quarterly endpoints (and monthly, we need those for the monthly SMA which you’ll see shortly) and subset our momentum computations on those quarterly endpoints. Now let’s get the total rank for those subset-on-quarters momentum computations.
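For those newer to xts, here's a quick toy example of what endpoints actually returns (the dates here are arbitrary):

```r
library(xts)

# a small daily series spanning two quarters (toy data)
dates <- seq(as.Date("2014-01-01"), as.Date("2014-06-30"), by = "day")
x     <- xts(seq_along(dates), order.by = dates)

# endpoints returns row indices: a leading 0, then the last row of each period
endpoints(x, on = "quarters")

# subsetting by those indices keeps only the quarter-end observations
x[endpoints(x, on = "quarters"), ]
```

One call gives the row indices of the last observation in each quarter, so period-end subsetting is a single bracket operation rather than a manual date-grouping loop.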
```r
#rank short momentum, best highest rank
rocSrank <- t(apply(rocShortQtrs, 1, rank))

#rank long momentum, best highest rank
rocLrank <- t(apply(rocLongQtrs, 1, rank))

#total rank, long slightly higher than short, sum them
totalRank <- 1.01*rocLrank + rocSrank

#function that takes 100% position in highest ranked security
maxRank <- function(rankRow) {
  return(rankRow==max(rankRow))
}

#apply above function to our quarterly ranks every quarter
rankPos <- t(apply(totalRank, 1, maxRank))
```
So as you can see, I rank the momentum computations by row, take a weighted sum (in slight favor of the long momentum), and then simply take the security with the highest rank at every period, giving me one 1 in every row and 0s otherwise.
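As a sanity check on the ranking logic, here's the composite rank on one made-up row of momentum values (the securities and numbers are hypothetical):

```r
# toy momentum values for four hypothetical securities at one quarter-end
rocShortRow <- c(A = .05, B = .02, C = .03, D = .05)
rocLongRow  <- c(A = .10, B = .12, C = .08, D = .11)

# higher momentum gets a higher rank
sRank <- rank(rocShortRow)
lRank <- rank(rocLongRow)

# weight the long rank slightly more so it breaks short-momentum ties
totalRank <- 1.01 * lRank + sRank

# 100% position in the top-ranked security, 0 elsewhere
maxRank <- function(rankRow) rankRow == max(rankRow)
maxRank(totalRank)
```

Note that A and D tie on short momentum; D's higher long momentum is what gives it the top composite rank.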
Now let’s do the other end of what determines position, which is the SMA filter. In this case, we need monthly data points for our three-month SMA, and then subset it to quarters to be on the same timescale as the quarterly ranks.
```r
#SMA of securities: use only monthly endpoints, subset to quarters, then filter
monthlyPrices <- prices[monthlyEps,]
monthlySMAs <- xts(apply(monthlyPrices, 2, SMA, n=nMonthSMA), order.by=index(monthlyPrices))
quarterlySMAs <- monthlySMAs[index(quarterlyPrices),]
smaFilter <- quarterlyPrices > quarterlySMAs
```
Now let’s put it together to get our final positions. Our cash position is simply one if we have no investment in the time period, and zero otherwise.
```r
finalPos <- rankPos*smaFilter
finalPos <- finalPos[!is.na(rocLongQtrs[,1]),]
cash <- xts(1-rowSums(finalPos), order.by=index(finalPos))
finalPos <- merge(finalPos, cash, join='inner')
```
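To see why `1 - rowSums(finalPos)` gives the cash weight, consider a toy position matrix (made up for illustration): when the SMA filter knocks out the top-ranked security, the row sums to zero and cash receives the full weight.

```r
# toy quarterly position matrix for three hypothetical securities:
# row 1: the top-ranked security passed its SMA filter
# row 2: the top-ranked security failed its SMA filter, so no position
finalPos <- matrix(c(1, 0, 0,
                     0, 0, 0), nrow = 2, byrow = TRUE)

# cash takes whatever weight the risky assets don't
cash <- 1 - rowSums(finalPos)
cbind(finalPos, cash)
```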
Now we can finally compute our strategy returns.
```r
prices <- merge(prices, cashPrices, join='inner')
returns <- Return.calculate(prices)
stratRets <- Return.portfolio(returns, finalPos)

table.AnnualizedReturns(stratRets)
maxDrawdown(stratRets)
charts.PerformanceSummary(stratRets)
plot(log(cumprod(1+stratRets)))
```
So what do things look like?
Like this:
```r
> table.AnnualizedReturns(stratRets)
                          portfolio.returns
Annualized Return                    0.1899
Annualized Std Dev                   0.1619
Annualized Sharpe (Rf=0%)            1.1730

> maxDrawdown(stratRets)
[1] 0.1927991
```
And since the first equity curve doesn’t give much of an indication in the early years, I’ll take the advice of Tony Cooper (of Double Digit Numerics) and show the log equity curve as well.
In short, from 1997 through 2002, this strategy seemed to be going nowhere, and then took off. Since I was able to take this backtest back to 1997, it makes me wonder why the SeekingAlpha article started it only in 2003, because even with 1997-2002 thrown in, the strategy’s risk/reward profile still looks fairly solid: a CAR-to-max-drawdown ratio of about 1 (slightly less, but that’s okay for something that turns over so infrequently, and in so few securities!), and a Sharpe ratio higher than 1. Certainly better than what the market itself offered retail investors over the same period of time. Perhaps Cliff Smith himself could chime in regarding his choice of time frame.
In any case, Cliff Smith marketed the strategy as having a CAGR higher than 28%; his article was published on August 15, 2014, and its backtest started in 2003. Let’s see if we can replicate those results.
```r
stratRets <- stratRets["2002-12-31::2014-08-15"]

table.AnnualizedReturns(stratRets)
maxDrawdown(stratRets)
charts.PerformanceSummary(stratRets)
plot(log(cumprod(1+stratRets)))
```
Which results in this:
```r
> table.AnnualizedReturns(stratRets)
                          portfolio.returns
Annualized Return                    0.2862
Annualized Std Dev                   0.1734
Annualized Sharpe (Rf=0%)            1.6499

> maxDrawdown(stratRets)
[1] 0.1911616
```
A far improved risk/return profile without 1997-2002 (or the out-of-sample period after Cliff Smith’s publishing date). Here are the two equity curves in-sample.
In short, the results look better, and the SeekingAlpha article’s results are validated.
Now, let’s look at the out-of-sample periods on their own.
```r
stratRets <- Return.portfolio(returns, finalPos)
earlyOOS <- stratRets["::2002-12-31"]

table.AnnualizedReturns(earlyOOS)
maxDrawdown(earlyOOS)
charts.PerformanceSummary(earlyOOS)
```
Here are the results:
```r
> table.AnnualizedReturns(earlyOOS)
                          portfolio.returns
Annualized Return                    0.0321
Annualized Std Dev                   0.1378
Annualized Sharpe (Rf=0%)            0.2327

> maxDrawdown(earlyOOS)
[1] 0.1927991
```
And with the corresponding equity curve (which does not need a log-scale this time).
In short, it basically did nothing for an entire five years. That’s rough, and I definitely don’t like that this period was left off of the SeekingAlpha article. Anytime I can extend a backtest further back than a strategy’s original author and then find skeletons in the closet (as happened for each and every one of Harry Long’s strategies), it sets off red flags on this end, so I’m hoping there’s some good explanation for leaving off 1997-2002 that I’m simply not aware of.
Lastly, let’s look at the post-publication out-of-sample performance.
```r
lateOOS <- stratRets["2014-08-15::"]

charts.PerformanceSummary(lateOOS)
table.AnnualizedReturns(lateOOS)
maxDrawdown(lateOOS)
```
With the following results:
```r
> table.AnnualizedReturns(lateOOS)
                          portfolio.returns
Annualized Return                    0.0752
Annualized Std Dev                   0.1426
Annualized Sharpe (Rf=0%)            0.5277

> maxDrawdown(lateOOS)
[1] 0.1381713
```
And the following equity curve:
Basically, while it’s ugly, it made new equity highs over only two more transactions (and in such a small sample size, anything can happen), so I’ll put this one down as a small, ugly win, but a win nevertheless.
If anyone has any questions or comments about this strategy, I’d love to see them, as this is basically a first-pass replica. To Mr. Cliff Smith’s credit, the results check out. And when the worst thing one can say about a strategy is that it had a period of flat performance (namely, when the market crested at the end of the Clinton administration, right before the dot-com bust), that’s not the worst thing in the world. Certainly, I’d call that a lot better than extending the performance and finding an unreported 40% max drawdown (as with all of the leveraged SPY + volatility ETF strategies I’ve demonstrated), or finding that the investment thesis shows much more disappointing performance over a longer time horizon, to the point of bordering on a falsehood (the Zomma warthog index).
More replications (including one requested by several readers) will be upcoming.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.