An Update On EAA and a Volatility Strategy
Again, before starting this post, I’d like to inform readers that the book Quantitative Trading With R, written by Harry Georgakopoulos, with contributions from me, is now available for order on Amazon. Already, it has garnered a pair of five-star reviews, and it deals not only with quantstrat, but with topics such as spread trading, high-frequency data, and options. I highly recommend it.
So, first things first: I want to inform everyone that EAA (Elastic Asset Allocation, the algorithm released by Dr. Wouter Keller a couple of weeks ago) is now in my IKTrading package. I made some modifications to deal with incongruous security starting dates (that is, it now handles NA momentum values and the like, similarly to the process in FAA). Again, no particular guarantees, but at this point, I think the algorithm won’t regularly break (I may be missing some edge case, so feedback is always appreciated). Also, after thinking about it a bit more, I don’t foresee EAA as it stands being able to make use of a conditional correlation algorithm: rather than using correlation simply for security selection, EAA uses correlations to make weighting decisions, which raises the question of what the correlation value of the first security would be. 0? -1? Ideas on how to address this are always welcome, since applying conditional correlation outside of a ranking context is now a topic of interest to me.
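For those who want to try it out, grabbing the updated function is just a matter of installing the package from my GitHub (a quick sketch, assuming you have a recent devtools installed):

# install/update IKTrading from GitHub to get the updated EAA function
# (assumes the devtools package is installed)
devtools::install_github("IlyaKipnis/IKTrading")
require(IKTrading)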
Furthermore, TrendXplorer has recently written his own post on EAA on his blog, after seeing mine. It is *very* comprehensive, and for those who are more inclined towards AmiBroker, you’ll be in Nirvana. It can be found here. Also, it seems he has worked hand in hand with another SeekingAlpha contributor named Cliff Smith, and thus had a far more positive experience than I did going solo replicating Harry Long’s strategies (or, if you like, marketing materials). TrendXplorer did some work with a strategy called QTS, which I hope I’ll be able to cover in the near future. That can all be found here. So, I’d like to formally extend thanks to TrendXplorer, both for his work on EAA and for pointing me towards yet another viable asset allocation strategy.
To test my own updated EAA, I added Tesla Motors to the original seven securities. But first, let’s look at the current version of the function itself.
"EAA" <- function(monthlyPrices, wR=1, wV=0, wC=.5, wS=2, errorJitter=1e-6, cashAsset=NULL, bestN=1+ceiling(sqrt(ncol(monthlyPrices))), enableCrashProtection = TRUE, returnWeights=FALSE, monthlyRiskFree=NULL) { returns <- Return.calculate(monthlyPrices) returns <- returns[-1,] #return calculation uses one observation if(!is.null(monthlyRiskFree)) { returnsRF <- Return.calculate(monthlyRiskFree) returnsRF <- returnsRF[-1,] } if(is.null(cashAsset)) { returns$zeroes <- 0 cashAsset <- "zeroes" warning("No cash security specified. Recommended to use one of: quandClean('CHRIS/CME_US'), SHY, or VFISX. Using vector of zeroes instead.") } cashCol <- grep(cashAsset, colnames(returns)) weights <- list() for(i in 1:(nrow(returns)-11)) { returnsData <- returns[i:(i+11),] #each chunk will be 12 months of returns data #per-month mean of cumulative returns of 1, 3, 6, and 12 month periods periodReturn <- ((returnsData[12,] + Return.cumulative(returnsData[10:12,]) + Return.cumulative(returnsData[7:12,]) + Return.cumulative(returnsData)))/22 if(!is.null(monthlyRiskFree)) { rfData <- returnsRF[i:(i+11),] rfReturn <- ((rfData[12,] + Return.cumulative(rfData[10:12,]) + Return.cumulative(rfData[7:12,]) + Return.cumulative(rfData)))/22 periodReturn <- periodReturn - as.numeric(rfReturn) } vols <- StdDev.annualized(returnsData) mktIndex <- xts(rowMeans(returnsData, na.rm=TRUE), order.by=index(returnsData)) #equal weight returns of universe cors <- cor(returnsData, mktIndex) #correlations to market index weightedRets <- periodReturn ^ wR weightedCors <- (1 - as.numeric(cors)) ^ wC weightedVols <- (vols + errorJitter) ^ wV wS <- wS + errorJitter z <- (weightedRets * weightedCors / weightedVols) ^ wS #compute z_i and zero out negative returns z[periodReturn < 0] <- 0 crashProtection <- sum(z==0, na.rm=TRUE)/sum(!is.na(z)) #compute crash protection cash cushion orderedZ <- sort(as.numeric(z), decreasing=TRUE) selectedSecurities <- z >= orderedZ[bestN] preNormalizedWeights <- z*selectedSecurities #select top N securities, keeping z_i scores periodWeights <- preNormalizedWeights/sum(preNormalizedWeights, na.rm=TRUE) #normalize if (enableCrashProtection) { periodWeights <- periodWeights * (1-crashProtection) #CP rule } periodWeights[is.na(periodWeights)] <- 0 weights[[i]] <- periodWeights } weights <- do.call(rbind, weights) weights[, cashCol] <- weights[, cashCol] + 1-rowSums(weights) #add to risk-free asset all non-invested weight strategyReturns <- Return.rebalancing(R = returns, weights = weights) #compute strategy returns if(returnWeights) { return(list(weights, strategyReturns)) } else { return(strategyReturns) } }
Essentially, little changed aside from a few lines dealing with NAs (that is, securities that were not yet trading at a given point in time, whose prices appear as NAs).

To test whether those modifications worked, I checked whether adding TSLA would break the code. Here is the new test code.
require(quantmod)
require(PerformanceAnalytics)

symbols <- c("VTSMX", "FDIVX", "VEIEX", "VBMFX", "VFISX", "VGSIX", "QRAAX", "TSLA")

getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
ep <- endpoints(prices, "months")
prices <- prices[ep,]
prices <- prices["1997-03::"]

getSymbols("^IRX", from="1990-01-01")
dailyYield <- (1+(Cl(IRX)/100))^(1/252) - 1
threeMoPrice <- cumprod(1+dailyYield)
threeMoPrice <- threeMoPrice["1997-03::"]
threeMoPrice <- threeMoPrice[endpoints(threeMoPrice, "months"),]

offensive <- EAA(prices, cashAsset="VBMFX", bestN=3)
defensive <- EAA(prices, cashAsset="VBMFX", bestN=3, wS=.5, wC=1)
offRF <- EAA(prices, cashAsset="VBMFX", bestN=3, monthlyRiskFree = threeMoPrice)
defRF <- EAA(prices, cashAsset="VBMFX", bestN=3, wS=.5, wC=1, monthlyRiskFree = threeMoPrice)

compare <- cbind(offensive, defensive, offRF, defRF)
colnames(compare) <- c("Offensive", "Defensive", "OffRF", "DefRF")
stats <- rbind(Return.annualized(compare)*100,
               StdDev.annualized(compare)*100,
               maxDrawdown(compare)*100,
               SharpeRatio.annualized(compare))
rownames(stats)[3] <- "Worst Drawdown"

charts.PerformanceSummary(compare)
stats
With the following statistics table and equity curve:
> stats
                                 Offensive Defensive      OffRF     DefRF
Annualized Return               17.6174693 13.805683 16.7376777 13.709368
Annualized Standard Deviation   22.7328695 13.765444 22.3854966 13.504313
Worst Drawdown                  25.3534015 12.135310 25.3559118 12.146654
Annualized Sharpe Ratio (Rf=0%)  0.7749778  1.002923  0.7477019  1.015184
Essentially, TSLA, a high-momentum, high-volatility stock, causes some consternation in the offensive variant of the algorithm. Let’s look at the weight statistics for TSLA when it was in the portfolio.
test <- EAA(prices, cashAsset = "VBMFX", bestN=3, returnWeights=TRUE)
weights <- test[[1]]
summary(weights$TSLA[weights$TSLA > 0])
With the results:
     Index                 TSLA        
 Min.   :2011-07-29   Min.   :0.01614  
 1st Qu.:2012-09-14   1st Qu.:0.32345  
 Median :2013-07-31   Median :0.48542  
 Mean   :2013-06-20   Mean   :0.51415  
 3rd Qu.:2014-04-15   3rd Qu.:0.75631  
 Max.   :2014-12-31   Max.   :0.95793  
Also, to be clear, R’s summary function was not created with xts objects in mind, so the Index column above is pure nonsense: R is computing summary statistics on the underlying numeric values of the date index, which have no relation to the TSLA weights. So if you ever call summary on anything in an xts, be aware that the dates shown do not actually correspond to the weights beside them (if such a pairing even exists; e.g., the mean of the weights isn’t an actual weight at any point in time).
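If you want the dates properly paired with the weights, one simple workaround (a sketch using the weights object from the test above) is to pull the index and values out of the xts object explicitly:

# sketch: pair each nonzero TSLA weight with its actual rebalance date
tslaWeights <- weights$TSLA[weights$TSLA > 0]
head(data.frame(date = index(tslaWeights), weight = as.numeric(tslaWeights)))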
In any case, it seems that the offensive variant of the algorithm is susceptible to creating very poorly diversified portfolios, since it places no weight on security volatility (wV is zero), only on correlation. So if there is a very volatile instrument on a roaring trend, EAA will tell you to place essentially your entire portfolio in that one instrument, which can, of course, be the correct thing to do if you know for certain that said trend will continue... until, of course, it doesn’t.
I’m sure there are ways to account for instruments with wildly different risk/return profiles simply by varying the parameters, without any additional code. I just wanted to demonstrate the need to be aware of this phenomenon, which I happened upon only because I was testing the algorithm with incongruous starting dates and just so happened to pick a “hot topic” stock.
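For instance, here’s a minimal sketch of one such parameter experiment (the wV value is an illustrative assumption on my part, not one of Dr. Keller’s published variants): raising wV above the offensive variant’s default of zero penalizes volatile securities in the z-score, which should dampen TSLA’s weight.

# illustrative only: penalize volatility by raising wV above its default of 0
# (a parameter experiment, not one of the published EAA variants)
offVolPenalty <- EAA(prices, cashAsset="VBMFX", bestN=3, wV=1, returnWeights=TRUE)
summary(offVolPenalty[[1]]$TSLA[offVolPenalty[[1]]$TSLA > 0])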
Last (for this post), I’d like to make readers aware that the blogger Volatility Made Simple has created a variant of a strategy I wrote about earlier (again, thanks to Mr. Helmuth Vollmeier for providing the initial foundation), in which he mixed the signals from the three variants I had found to be in stable parameter regions. I’m really happy he has done so, as he’s one of the first people to have explicitly extended my work.
Unfortunately, said strategy is currently in a drawdown. However, comparing its drawdown curve against that of XIV itself, it seems that volatility has simply been doing crazy things lately, and the strategy’s drawdown has been worse in the past. I am concerned, however, that it may be prone to overfitting; it’s a constant reminder that there is still more to learn, and more techniques to use to convince oneself that a backtest isn’t just an overfit, data-mined, sample-dependent illusion with good marketing that will break down immediately on a larger sample. However, as I did not originate the strategy myself, I’d at least like to hope that whoever first came up with the VXV/VXMT ratio idea had some good rationale for it to begin with.
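For readers who missed the earlier posts, here is a minimal sketch of the core idea as I recall it (the lookback is an illustrative placeholder, not one of the three stable-region variants, and vxv and vxmt are assumed to be xts series of index closes):

# minimal sketch, not the exact mixed-signal variant: when a moving average of
# the VXV/VXMT ratio is below 1 (near-term expected volatility below mid-term),
# be long XIV; when above 1, be long VXX. Signals are lagged a day to avoid
# lookahead bias. Assumes `vxv` and `vxmt` are xts series of closes.
require(TTR)
ratio <- vxv/vxmt
ratioSMA <- SMA(ratio, n = 60) # the 60-day lookback is an illustrative assumption
sigXIV <- lag(ratioSMA < 1, 1)
sigVXX <- lag(ratioSMA > 1, 1)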
In the immediate future, I’ll be looking into change point analysis and twitter’s new BreakoutDetection package.
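For the curious, the latter lives on twitter’s GitHub; this is the quick-start example from their announcement (using the Scribe dataset bundled with the package), which is where I’ll be starting:

# twitter's own quick-start example; requires devtools to install from GitHub
devtools::install_github("twitter/BreakoutDetection")
library(BreakoutDetection)
data(Scribe)
res <- breakout(Scribe, min.size=24, method='multi', beta=.001, degree=1, plot=TRUE)
res$plot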
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.