
An Attempt At Replicating Flexible Asset Allocation (FAA)


Since the people at Alpha Architect were so kind as to feature my blog in a post, I figured I’d investigate an idea that I first found out about from their site–namely, flexible asset allocation. Here’s the SSRN, and the corresponding Alpha Architect post.

Here’s the script I used for this replication, which is completely self-contained.

require(quantmod) #for getSymbols and Ad
require(PerformanceAnalytics)

mutualFunds <- c("VTSMX", #Vanguard Total Stock Market Index
                 "FDIVX", #Fidelity Diversified International Fund
                 "VEIEX", #Vanguard Emerging Markets Stock Index Fund
                 "VFISX", #Vanguard Short-Term Treasury Fund
                 "VBMFX", #Vanguard Total Bond Market Index Fund
                 "QRAAX", #Oppenheimer Commodity Strategy Total Return 
                 "VGSIX" #Vanguard REIT Index Fund
)
                 
#mid 1997 to end of 2012
getSymbols(mutualFunds, from="1997-06-30", to="2012-12-31")
tmp <- list()
for(fund in mutualFunds) {
  tmp[[fund]] <- Ad(get(fund))
}

#always use a list when intending to cbind/rbind large quantities of objects
adPrices <- do.call(cbind, args = tmp)
colnames(adPrices) <- gsub(".Adjusted", "", colnames(adPrices))

FAAreturns <- function(prices, monthLookback = 4,
                                 weightMom=1, weightVol=.5, weightCor=.5, 
                                 riskFreeName="VFISX", bestN=3) {
  
  returns <- Return.calculate(prices)
  monthlyEps <- endpoints(prices, on = "months")
  riskFreeCol <- grep(riskFreeName, colnames(prices))
  tmp <- list()
  dates <- list()
  
  for(i in 2:(length(monthlyEps) - monthLookback)) {
    
    #subset data
    priceData <- prices[monthlyEps[i]:monthlyEps[i+monthLookback],]
    returnsData <- returns[monthlyEps[i]:monthlyEps[i+monthLookback],]
    
    #perform computations
    momentum <- data.frame(t(t(priceData[nrow(priceData),])/t(priceData[1,]) - 1))
    priceData <- priceData[, momentum > 0] #remove securities with momentum < 0
    returnsData <- returnsData[, momentum > 0]
    momentum <- momentum[momentum > 0]
    names(momentum) <- colnames(returnsData)
    
    vol <- as.numeric(-sd.annualized(returnsData)) #negated so that lower volatility ranks higher
    #sumCors <- -colSums(cor(priceData[endpoints(priceData, on="months")]))
    sumCors <- -colSums(cor(returnsData, use="complete.obs"))
    stats <- data.frame(cbind(momentum, vol, sumCors))
    
    if(nrow(stats) > 1) {
      
      #perform ranking
      ranks <- data.frame(apply(stats, 2, rank))
      weightRankSum <- weightMom*ranks$momentum + weightVol*ranks$vol + weightCor*ranks$sumCors
      totalRank <- rank(weightRankSum)
      
      #find top N values, from http://stackoverflow.com/questions/2453326/fastest-way-to-find-second-third-highest-lowest-value-in-vector-or-column
      #thanks to Dr. Rob J. Hyndman
      upper <- length(names(returnsData))
      lower <- max(upper-bestN+1, 1)
      topNvals <- sort(totalRank, partial=seq(from=upper, to=lower))[c(upper:lower)]
      
      #compute weights
      longs <- totalRank %in% topNvals #invest in ranks length - bestN or higher (in R, rank 1 is lowest)
      longs <- longs/sum(longs) #equal weight all candidates
      longs[longs > 1/bestN] <- 1/bestN #in the event that we have fewer than top N invested into, lower weights to 1/top N
      names(longs) <- rownames(ranks)
      
    } else if(nrow(stats) == 1) { #only one security had positive momentum 
      longs <- 1/bestN
      names(longs) <- rownames(stats)
    } else { #no securities had positive momentum 
      longs <- 1
      names(longs) <- riskFreeName
    }
    
    #append removed names (those with momentum < 0)
    removedZeroes <- rep(0, ncol(returns)-length(longs))
    names(removedZeroes) <- names(returns)[!names(returns) %in% names(longs)]
    longs <- c(longs, removedZeroes)
    
    #reorder to be in the same column order as original returns/prices
    longs <- data.frame(t(longs))
    longs <- longs[, names(returns)]
    
    #append lists
    tmp[[i]] <- longs
    dates[[i]] <- index(returnsData)[nrow(returnsData)]
  }
  
  weights <- do.call(rbind, tmp)
  dates <- do.call(c, dates)
  weights <- xts(weights, order.by=as.Date(dates)) 
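  #any weight not allocated to the funds is swept into the risk-free asset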
  weights[, riskFreeCol] <- weights[, riskFreeCol] + 1-rowSums(weights)
  strategyReturns <- Return.rebalancing(R = returns, weights = weights, geometric = FALSE)
  return(strategyReturns)
}

replicaAttempt <- FAAreturns(adPrices)
bestN4 <- FAAreturns(adPrices, bestN=4)
N3vol1cor1 <- FAAreturns(adPrices, weightVol = 1, weightCor = 1)
minRisk <- FAAreturns(adPrices, weightMom = 0, weightVol=1, weightCor=1)
pureMomentum <- FAAreturns(adPrices, weightMom=1, weightVol=0, weightCor=0)
maxDecor <- FAAreturns(adPrices, weightMom=0, weightVol=0, weightCor=1)
momDecor <- FAAreturns(adPrices, weightMom=1, weightVol=0, weightCor=1)

all <- cbind(replicaAttempt, bestN4, N3vol1cor1, minRisk, pureMomentum, maxDecor, momDecor)
colnames(all) <- c("Replica Attempt", "N4", "vol_1_cor_1", "minRisk", "pureMomentum", "maxDecor", "momDecor")
charts.PerformanceSummary(all, colorset=c("black", "red", "blue", "green", "darkgrey", "purple", "orange"))

stats <- data.frame(t(rbind(Return.annualized(all)*100,
      maxDrawdown(all)*100,
      SharpeRatio.annualized(all))))
stats$Return_To_Drawdown <- stats[,1]/stats[,2]

Here’s the formal procedure:

1. Using the monthly endpoint functionality in R, every month, look back over the past four months.
2. Compute each fund's momentum as its most recent price over the first price in the observed set (that is, the price four months ago) minus one, and immediately remove any funds with momentum less than zero (this was a suggestion from Mr. David Varadi of CSS Analytics, with whom I'll be collaborating in the near future); a toy sketch of this filter follows below.
3. With the pared-down universe, rank the funds by momentum, by annualized volatility (the results are identical with plain standard deviation), and by the sum of their correlations with each other. Since higher volatility and higher correlation are worse, multiply each by negative one before ranking.
4. Invest in the top N funds each period; if fewer than N funds have positive momentum, each remaining fund receives a weight of 1/N, with the residual weight placed into the "risk-free" asset, in this case VFISX.

All price and return data were daily, adjusted data (as per the SSRN paper).
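As a toy illustration of the momentum computation and filter (made-up prices for three hypothetical funds over a single four-month window):

#toy example (made-up numbers): row 1 is the price four months ago,
#row 2 the most recent price
toyPrices <- matrix(c(100, 110,   #fundA: +10%
                      100, 95,    #fundB: -5%, gets dropped
                      100, 104),  #fundC: +4%
                    nrow=2, dimnames=list(NULL, c("fundA", "fundB", "fundC")))
momentum <- toyPrices[nrow(toyPrices),]/toyPrices[1,] - 1
momentum[momentum > 0] #fundB is removed before any ranking takes place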

However, my results do not match the paper's (or Alpha Architect's): I don't see annualized returns breaking 20%, nor, most importantly, do I see the single-digit drawdowns. I hope the code is clear enough at each step that the source of the discrepancy can be spotted, but that aside, let me explain the idea.

The idea, familiar to those versed in trend following, is that in addition to seeking return through the momentum anomaly (there are stacks of literature on the simple idea that what goes up tends to keep going up, to an extent), there is also a place for risk management. This comes in the form of ranking on correlation and volatility, and giving a different weight to each individual component rank (that is, momentum has a weight of 1, correlation .5, and volatility .5). The weighted sum of the ranks is then itself ranked (two layers of ranking) to produce a final aggregate rank.
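To make the two layers of ranking concrete, here is a minimal sketch with made-up statistics for four hypothetical funds (volatility and the correlation sums already negated, as in the function above, so that higher values are better):

#minimal sketch of the rank-of-ranks aggregation (made-up inputs)
stats <- data.frame(momentum = c(.10, .04, .07, .02),
                    vol      = -c(.15, .10, .20, .12),
                    sumCors  = -c(2.1, 1.8, 2.5, 1.6),
                    row.names = c("fundA", "fundB", "fundC", "fundD"))
ranks <- data.frame(apply(stats, 2, rank)) #in R, rank 1 is the worst
weightRankSum <- 1*ranks$momentum + .5*ranks$vol + .5*ranks$sumCors
totalRank <- rank(weightRankSum) #second layer of ranking; the top bestN are held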

Unfortunately, when it comes to the implementation, the code has to be cluttered with some data munging and edge-case checking, which takes a little away from the readability. To hammer home a slight technical tangent: in R, whenever one plans on doing iterated appending (e.g. one table that is repeatedly appended to), it is always better to append each iteration onto a list and call rbind/cbind only once at the end, because R copies an object on assignment, which makes repeated rbinding or cbinding expensive for anything beyond tiny data frames. The upside to data frames is that they're much easier to print out to a console and to do vectorized operations on; lists, however, are more efficient when it comes to iteration.
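Here is a minimal demonstration of that pattern (a hypothetical thousand-iteration loop; exact timings will vary by machine):

#growing a data frame with rbind copies the entire object every iteration
slow <- data.frame()
for(i in 1:1000) { slow <- rbind(slow, data.frame(x=i)) }

#appending to a list is cheap; rbind is then called exactly once at the end
tmp <- list()
for(i in 1:1000) { tmp[[i]] <- data.frame(x=i) }
fast <- do.call(rbind, tmp)

identical(slow, fast) #TRUE: same result, far fewer copies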

In any case, here’s an examination of some variations of this strategy.

The first is a simple attempt at replication (3 of 7 securities, a weight of 1 on momentum, .5 each on volatility and correlation). The second uses the same settings, just with the top four securities instead of the top three. The third uses three securities, but doubles the weighting on the two risk metrics (volatility and correlation). The next several are conceptual permutations: a risk-minimization profile that puts no weight on momentum itself (analogous to what the Smart Beta folks would call min-vol), a pure momentum strategy (disregarding volatility and correlation), a max-decorrelation strategy (all weight on correlation), and finally, a hybrid of momentum and max decorrelation.

Here is the performance chart:

Overall, this looks like evidence of robustness, given that I fundamentally changed the nature of the strategies in quite a few cases, rather than simply tweaked the weights here or there. The momentum/decorrelation hybrid is a bit difficult to see, so here’s a clearer image for how it compared with the original strategy.

Overall, a slightly smoother ride, though slightly lower in terms of returns. Here’s the table comparing all seven variations:

> stats
                Annualized.Return Worst.Drawdown Annualized.Sharpe.Ratio..Rf.0.. Return_To_Drawdown
Replica Attempt          14.43802      13.156252                        1.489724          1.0974268
N4                       12.48541      10.212778                        1.492447          1.2225281
vol_1_cor_1              12.86459      12.254390                        1.608721          1.0497944
minRisk                  11.26158       9.223409                        1.504654          1.2209786
pureMomentum             13.88501      14.401121                        1.135252          0.9641619
maxDecor                 11.89159      11.685492                        1.434220          1.0176368
momDecor                 14.03615      10.951574                        1.489358          1.2816563

Overall, there doesn’t seem to be any objectively best variant, though pure momentum is definitely the worst (as may be expected, otherwise the original paper wouldn’t be as meaningful). If one is looking for return to max drawdown, then the momentum/max decorrelation hybrid stands out, though the 4-security variant and minimum risk variants also work (though they’d have to be leveraged a tiny bit to get the annualized returns to the same spot). On Sharpe Ratio, the variant with double the original weighting on volatility and correlation stands out, though its return to drawdown ratio isn’t the greatest.

However, what I take away from this endeavor is that the number of assets was relatively tiny, along with the following statistic:

> SharpeRatio.annualized(Return.calculate(adPrices))
                                    VTSMX     FDIVX     VEIEX    VFISX    VBMFX       QRAAX     VGSIX
Annualized Sharpe Ratio (Rf=0%) 0.2520994 0.3569858 0.2829207 1.794041 1.357554 -0.01184516 0.3062336

Aside from the two bond market funds, which are notorious for delivering lower returns in exchange for lower risk, the Sharpe ratios of the individual securities are far below 1. The strategy itself, on the other hand, has very respectable Sharpe ratios, working with some rather sub-par components.

Simply put, consider running this asset allocation heuristic on your own set of strategies, as opposed to pre-set funds. Furthermore, it is highly likely that the actual details of the ranking algorithm can be improved, from different ranking metrics (add drawdown?) to more novel concepts such as stepwise correlation ranking/selection.
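For the curious, here is a rough sketch of what stepwise correlation selection might look like (this is my own greedy interpretation, not an algorithm from the paper; the hypothetical stepwiseSelect function assumes totalRank is a named vector of the aggregate ranks computed earlier):

#hypothetical sketch of greedy stepwise correlation selection:
#seed with the best-ranked asset, then repeatedly add the asset least
#correlated, on average, with those already selected
stepwiseSelect <- function(returnsData, totalRank, bestN=3) {
  corMat <- cor(returnsData, use="complete.obs")
  selected <- names(which.max(totalRank)) #highest aggregate rank seeds the set
  while(length(selected) < min(bestN, ncol(returnsData))) {
    remaining <- setdiff(colnames(returnsData), selected)
    avgCors <- colMeans(corMat[selected, remaining, drop=FALSE])
    selected <- c(selected, names(which.min(avgCors))) #least-correlated addition
  }
  return(selected)
}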

Thanks for reading.

