A Closer Update To David Varadi’s Percentile Channels Strategy
Thanks to seeing Michael Kapler's implementation of David Varadi's percentile channels strategy, I was able to get a better understanding of what was going on. It turns out that rather than evaluating the channel value only at the end of each month, the strategy actually tracks the channel signal intra-month. So if you received a sell signal in the middle of the month, and by month-end the price had moved back to intra-channel values, you would still be on that sell signal rather than on the previous month's end-of-month signal. When all is said and done, it's not much different from my previous implementation (slightly higher Sharpe, slightly lower returns and drawdowns). In any case, the concept remains the same.
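To make that concrete, here's a minimal sketch (with made-up signal values, not strategy output) of how last-observation-carried-forward produces exactly this behavior: a sell signal fired mid-month persists to the month-end sampling date even if the price has since re-entered the channel.

require(xts)
#hypothetical daily signal: NA means no crossing occurred that day
dates <- as.Date("2015-01-26") + 0:5
signal <- xts(c(NA, NA, -1, NA, NA, NA), order.by=dates) #cross down mid-period
na.locf(signal) #the -1 is carried forward, so month-end sampling still sees -1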
For this implementation, I'm going to use the runquantile function from the caTools package, which works like a generalized runMedian/runMin/runMax from TTR once you give it the proper arguments (on its defaults, its results are questionable).
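For a quick sense of what those arguments do (toy data, unrelated to the strategy):

require(caTools)
x <- c(1, 3, 2, 5, 4, 6, 8, 7, 9, 10)
#default endrule="quantile" fills the edges using shrinking windows,
#so the first few values are not true k-period quantiles
runquantile(x, k=5, probs=.75)
#endrule="trim" returns only the length(x)-k+1 fully-formed windows,
#which is why the leading NAs must be appended manually afterward
runquantile(x, k=5, probs=.75, endrule="trim")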
Here’s the code:
require(quantmod)
require(caTools)
require(PerformanceAnalytics)
require(TTR)

getSymbols(c("LQD", "DBC", "VTI", "ICF", "SHY"), from="1990-01-01")
prices <- cbind(Ad(LQD), Ad(DBC), Ad(VTI), Ad(ICF), Ad(SHY))
prices <- prices[!is.na(prices[,2]),] #drop rows where DBC (column 2) has no data
returns <- Return.calculate(prices)
cashPrices <- prices[, 5]
assetPrices <- prices[, -5]

pctChannelPosition <- function(prices, dayLookback = 60,
                               lowerPct = .25, upperPct = .75) {
  #runquantile with endrule="trim" drops the incomplete windows,
  #so pad with leading NAs and rebuild the xts objects manually
  leadingNAs <- matrix(nrow=dayLookback-1, ncol=ncol(prices), NA)

  upperChannels <- runquantile(prices, k=dayLookback, probs=upperPct, endrule="trim")
  upperQ <- xts(rbind(leadingNAs, upperChannels), order.by=index(prices))

  lowerChannels <- runquantile(prices, k=dayLookback, probs=lowerPct, endrule="trim")
  lowerQ <- xts(rbind(leadingNAs, lowerChannels), order.by=index(prices))

  positions <- xts(matrix(nrow=nrow(prices), ncol=ncol(prices), NA), order.by=index(prices))
  positions[prices > upperQ & lag(prices) < upperQ] <- 1 #cross up
  positions[prices < lowerQ & lag(prices) > lowerQ] <- -1 #cross down
  positions <- na.locf(positions) #carry the last crossing signal forward
  positions[is.na(positions)] <- 0

  colnames(positions) <- colnames(prices)
  return(positions)
}

#find our positions, add them up
d60 <- pctChannelPosition(assetPrices)
d120 <- pctChannelPosition(assetPrices, dayLookback = 120)
d180 <- pctChannelPosition(assetPrices, dayLookback = 180)
d252 <- pctChannelPosition(assetPrices, dayLookback = 252)
compositePosition <- (d60 + d120 + d180 + d252)/4

#sample the daily composite signal at month-ends
compositeMonths <- compositePosition[endpoints(compositePosition, on="months"),]

#20-day volatility, sampled at the same month-ends
monthlySD20 <- xts(sapply(returns[,-5], runSD, n=20), order.by=index(prices))[index(compositeMonths),]

#inverse-volatility (risk parity) weights; negative composites go to cash
weight <- compositeMonths*1/monthlySD20
weight <- abs(weight)/rowSums(abs(weight))
weight[compositeMonths < 0 | is.na(weight)] <- 0
weight$CASH <- 1-rowSums(weight)

#not actually equal weight--more like composite weight, going with
#Michael Kapler's terminology here
ewWeight <- abs(compositeMonths)/rowSums(abs(compositeMonths))
ewWeight[compositeMonths < 0 | is.na(ewWeight)] <- 0
ewWeight$CASH <- 1-rowSums(ewWeight)

rpRets <- Return.portfolio(R = returns, weights = weight)
ewRets <- Return.portfolio(R = returns, weights = ewWeight)
Essentially, with runquantile, you need to give it the "trim" argument, then manually append the leading NAs, and then manually turn the result back into an xts object, which is annoying. One would think the author of this package would take care of these quality-of-life issues, but no. In any case, there are two strategies at play here: one is the percentile channel risk parity strategy, and the other is what Michael Kapler calls "channel equal weight", which actually *isn't* an equal weight strategy. The composite position can take the values -1, -.5, 0, .5, and 1 (with .75 or .25 possible early on, while some of the longer lookback channels still read 0 rather than 1 or -1), and the second strategy simply uses those composite values as weights without taking volatility into account at all. I'm sticking with Michael Kapler's terminology to be consistent. That said, I don't personally use Michael Kapler's SIT package due to the vast differences in syntax between it and the usual R code I'm used to. However, your mileage may vary.
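As a back-of-the-envelope illustration of why "equal weight" is a misnomer here (hypothetical composite values, not actual strategy output):

#four assets with hypothetical month-end composite positions
composite <- c(1, .5, -1, .25)
ew <- abs(composite)/sum(abs(composite)) #0.364 0.182 0.364 0.091
ew[composite < 0] <- 0 #negative composites go to cash
ew #clearly not equal: 0.364 0.182 0.000 0.091
1 - sum(ew) #the remaining 0.364 sits in cash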
In any case, here’s the updated performance:
both <- cbind(rpRets, ewRets)
colnames(both) <- c("RiskParity", "Equal Weight")
charts.PerformanceSummary(both)
rbind(table.AnnualizedReturns(both), maxDrawdown(both))
apply.yearly(both, Return.cumulative)
Which gives us the following output:
> rbind(table.AnnualizedReturns(both), maxDrawdown(both))
                          RiskParity Equal Weight
Annualized Return         0.09380000    0.1021000
Annualized Std Dev        0.06320000    0.0851000
Annualized Sharpe (Rf=0%) 1.48430000    1.1989000
Worst Drawdown            0.06894391    0.1150246

> apply.yearly(both, Return.cumulative)
           RiskParity Equal Weight
2006-12-29 0.08352255   0.07678321
2007-12-31 0.05412147   0.06475540
2008-12-31 0.10663085   0.12212063
2009-12-31 0.11920721   0.19093131
2010-12-31 0.13756460   0.14594317
2011-12-30 0.11744706   0.08707801
2012-12-31 0.07730896   0.06085295
2013-12-31 0.06733187   0.08174173
2014-12-31 0.06435030   0.07357458
2015-02-17 0.01428705   0.01568372
In short, the more naive weighting scheme delivers slightly higher returns (10.2% vs. 9.4% annualized), but pays dearly for those marginal returns with downside risk: higher volatility and a considerably deeper maximum drawdown (11.5% vs. 6.9%).
Here are the equity curves:
So, there you have it. The results David Varadi obtained are legitimate. Nevertheless, I hope this demonstrates how easily small details can make material differences.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.