Summary
Marcos Lopez de Prado recently published a research paper that aims to improve on Markowitz's Critical Line Algorithm (CLA). The methodology is currently patent pending and the paper can be downloaded here. The paper proposes the Hierarchical Risk Parity (HRP) approach, which tackles the issues of the original CLA regarding instability, concentration, and underperformance.
HRP applies modern mathematics (graph theory and machine learning techniques) to build a diversified portfolio based on the information contained in the covariance matrix. However, unlike quadratic optimizers, HRP does not require the invertibility of the covariance matrix. In fact, HRP can compute a portfolio on an ill-degenerated or even a singular covariance matrix, an impossible feat for quadratic optimizers. Monte Carlo experiments show that HRP delivers lower out-of-sample variance than CLA, even though minimum-variance is CLA’s optimization objective. HRP also produces less risky portfolios out-of-sample compared to traditional risk parity methods.
The main idea of HRP is to allocate weights to a portfolio of securities based on
- the clusters formed by the securities (determined by how each security correlates with the rest of the portfolio)
- the volatility of each cluster (more volatile clusters receive a smaller weight, and vice versa)
This post demonstrates an Rcpp + OpenMP implementation of the HRP methodology suggested by the paper. It takes security returns as input and produces a weight vector for all securities involved.
The computation is split into four stages:
- Compute Distance Matrix
- Cluster Securities
- Quasi-Diagonalize (QD) the Covariance Matrix
- Generate Security Weighting
Compute Distance Matrix
In the HRP paper, a cluster is defined as a group of securities that correlate similarly with the other securities in the portfolio.
First, we compute an $n \times n$ distance matrix based on the correlation matrix of the $n$ assets. The distance is defined as $d_{i,j} = \sqrt{\tfrac{1}{2}(1 - \rho_{i,j})}$, where $\rho_{i,j}$ is the correlation between assets $i$ and $j$. The lower the "distance", the more correlated two assets are. This step is implemented in the distanceMatrix_elementwise function.
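The post's source is not inlined here, but a minimal sketch of this step, assuming a correlation matrix as input, might look like the following (the function name mirrors the post's; the body is illustrative):

```cpp
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// Illustrative sketch: map each correlation rho_{i,j} to the distance
// d_{i,j} = sqrt((1 - rho_{i,j}) / 2), element by element.
// [[Rcpp::export]]
arma::mat distanceMatrix_elementwise(const arma::mat& correl) {
  return arma::sqrt((1.0 - correl) / 2.0);
}
```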
Second, we compute the Euclidean distance between the column vectors of the distance matrix, $\tilde{d}_{i,j} = \sqrt{\sum_{k=1}^{n} (d_{k,i} - d_{k,j})^2}$. This measures how similarly two assets correlate with the rest of the portfolio: the lower the distance, the more similar the two assets' correlations with the portfolio are. This step is implemented in the distanceMatrix_rowwise function.
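Again as a hedged sketch rather than the post's actual code, the column-wise Euclidean step could be written as:

```cpp
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// Illustrative sketch: entry (i, j) is the Euclidean distance between
// columns i and j of the element-wise distance matrix D.
// [[Rcpp::export]]
arma::mat distanceMatrix_rowwise(const arma::mat& D) {
  const arma::uword n = D.n_cols;
  arma::mat out(n, n, arma::fill::zeros);
  for (arma::uword i = 0; i < n; ++i) {
    for (arma::uword j = i + 1; j < n; ++j) {
      out(i, j) = arma::norm(D.col(i) - D.col(j), 2);
      out(j, i) = out(i, j);  // the distance matrix is symmetric
    }
  }
  return out;
}
```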
Cluster Generation
Given the matrix of similarities between each pair of assets, we proceed to the clustering step to group securities into a hierarchy of clusters.
During each iteration, we pick the two most similar branches based on the distance matrix generated in the previous step, group them together as a cluster, and replace them with a generalizing branch. In this implementation, the generalizing branch is created using the nearest point algorithm: for a branch $u$ consisting of securities $i$ and $j$, the similarity with each remaining security $k$ in the portfolio is calculated as $\dot{d}_{u,k} = \min\{\tilde{d}_{i,k}, \tilde{d}_{j,k}\}$.
At the end of the clustering step, we have an $(n-1) \times 4$ matrix, where $n-1$ is the number of clusters generated. The first two elements of each row are the indices of the merged branches (each can be either a security or a generalizing branch). The third element is the similarity/distance between the two branches, and the last element is the number of securities in the cluster.
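The post's clustering code is not shown here; the following is a plain C++ sketch of the agglomeration loop under the nearest point rule, producing the $(n-1) \times 4$ matrix described above (identifiers are illustrative):

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <limits>
#include <vector>

// Illustrative sketch of single-linkage clustering on a pairwise distance
// matrix. Each output row stores {branch i, branch j, distance, cluster size};
// branch indices >= n denote generalizing branches created by earlier merges.
std::vector<std::array<double, 4>>
clusterize(std::vector<std::vector<double>> dist) {
  const std::size_t n = dist.size();
  std::vector<std::size_t> id(n);       // current branch index of each row
  std::vector<double> size(n, 1.0);     // securities under each branch
  std::vector<bool> alive(n, true);     // rows still participating in merges
  for (std::size_t i = 0; i < n; ++i) id[i] = i;

  std::vector<std::array<double, 4>> linkage;
  for (std::size_t step = 0; step + 1 < n; ++step) {
    // pick the two most similar active branches
    std::size_t a = 0, b = 1;
    double best = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < n; ++i)
      for (std::size_t j = i + 1; j < n; ++j)
        if (alive[i] && alive[j] && dist[i][j] < best) {
          best = dist[i][j]; a = i; b = j;
        }

    linkage.push_back({static_cast<double>(id[a]), static_cast<double>(id[b]),
                       best, size[a] + size[b]});

    // nearest point rule: the merged branch's distance to any remaining
    // branch k is min(d(a, k), d(b, k)); row a is reused for the new branch
    for (std::size_t k = 0; k < n; ++k)
      if (alive[k] && k != a && k != b) {
        const double d = std::min(dist[a][k], dist[b][k]);
        dist[a][k] = dist[k][a] = d;
      }
    id[a] = n + step;                   // index of the new generalizing branch
    size[a] += size[b];
    alive[b] = false;
  }
  return linkage;
}
```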
Quasi-Diagonalization
Given the clusters from the last step, we want to re-organize the covariance matrix so that its indexing follows the clusters. In order to achieve this, we first "flatten the clusters" based on the matrix generated in the last step. This step is implemented in the function clusterIndex with a recursive call on flatCluster; the result is a security index ordered by the clusters generated in the previous step.
In the flatCluster function, we start with the last cluster generated and trace back to its components based on the first two elements in the cluster matrix. If a component is a generalizing branch, the function calls itself recursively to trace back the components of that branch, until the index of a security is returned. The task construct from OpenMP is used to speed up the process by treating each "trace back" as a task, and a taskwait construct is used to ensure the security index is generated in a bottom-up fashion.
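A hedged sketch of this recursion, assuming the $(n-1) \times 4$ linkage matrix from the clustering step with branch indices $\geq n$ referring to linkage rows (names mirror the post's functions; the bodies are illustrative):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Illustrative sketch: expand a branch into the security indices beneath it,
// writing them into out[offset ...]. The cluster-size column (index 3) of the
// linkage matrix tells each right subtree where to start writing, so the two
// trace-backs can run as independent OpenMP tasks.
void flatCluster(long branch, long n,
                 const std::vector<std::array<double, 4>>& linkage,
                 std::vector<long>& out, long offset) {
  if (branch < n) {                     // a leaf, i.e. an actual security
    out[offset] = branch;
    return;
  }
  const long left  = static_cast<long>(linkage[branch - n][0]);
  const long right = static_cast<long>(linkage[branch - n][1]);
  const long leftSize =
      (left < n) ? 1 : static_cast<long>(linkage[left - n][3]);

  #pragma omp task shared(linkage, out)
  flatCluster(left, n, linkage, out, offset);
  #pragma omp task shared(linkage, out)
  flatCluster(right, n, linkage, out, offset + leftSize);
  #pragma omp taskwait                  // wait for both subtrees (bottom-up)
}

// Driver: flatten the hierarchy rooted at the last cluster generated.
std::vector<long> clusterIndex(long n,
                               const std::vector<std::array<double, 4>>& linkage) {
  std::vector<long> out(n);
  #pragma omp parallel
  #pragma omp single
  flatCluster(n + static_cast<long>(linkage.size()) - 1, n, linkage, out, 0);
  return out;
}
```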
With the cluster-based security index generated, the quasiDiag function re-arranges the covariance matrix based on the new index into a quasi-diagonal covariance matrix. In this way, "similar" securities are grouped together for the weight allocation step. To re-iterate, the HRP approach first divides securities into clusters and then allocates weights based on each cluster's risk level.
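With Armadillo, the re-ordering itself can be as simple as a non-contiguous submatrix view; as a hedged one-liner sketch:

```cpp
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// Illustrative sketch: permute rows and columns of the covariance matrix so
// the ordering follows the flattened cluster index (0-based positions).
// [[Rcpp::export]]
arma::mat quasiDiag(const arma::mat& cov, const arma::uvec& clusterIdx) {
  return cov.submat(clusterIdx, clusterIdx);
}
```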
Weighting Generation
With the re-organized quasi-diagonal covariance matrix and the asset index of the clustered securities, we proceed to weight allocation.
As stated in the paper, inverse-variance allocation is optimal for a diagonal covariance matrix. This step takes advantage of that fact by
- defining the variance of a set of securities as the variance obtained under inverse-variance allocation, and
- splitting the allocation between adjacent subsets in inverse proportion to their aggregated variances.
We initialize the weight of each security to 1: $w_i = 1$ for $i = 1, \dots, n$.
The allocation algorithm is as follows:
- bisect the portfolio into two sets, $L^{(1)}$ and $L^{(2)}$
- let $V^{(j)}$ be the covariance matrix of set $L^{(j)}$
- let $\tilde{w}^{(j)} = \operatorname{diag}(V^{(j)})^{-1} \big/ \operatorname{tr}\big(\operatorname{diag}(V^{(j)})^{-1}\big)$, the inverse-variance weights of set $L^{(j)}$
- let $\tilde{V}^{(j)} = \tilde{w}^{(j)\prime} V^{(j)} \tilde{w}^{(j)}$, the variance of set $L^{(j)}$ under inverse-variance allocation
- let $\alpha = 1 - \tilde{V}^{(1)} / \big(\tilde{V}^{(1)} + \tilde{V}^{(2)}\big)$
- adjust the weights as $w_i \leftarrow \alpha\, w_i$ for $i \in L^{(1)}$ and $w_i \leftarrow (1 - \alpha)\, w_i$ for $i \in L^{(2)}$
These steps are applied recursively on each set until every set contains a single security.
The implementation is done in the function weightAllocation with recursive calls on bisectWeightAllocation. Similar to the cluster flattening step, the task construct from OpenMP is used to speed up the process by treating each bisection step as a task. A taskwait construct is not required here because the updates on the weight vector are top-down, and child tasks (further bisection steps) are not generated until the parent task (the current bisection step) has finished.
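As a hedged illustration (not the post's actual code; the names mirror the functions above, the bodies are assumptions), the recursive bisection with OpenMP tasks might look like:

```cpp
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
// [[Rcpp::plugins(openmp)]]

// Illustrative sketch: scale the weights of [lo, hi) and recurse into each
// half. The two halves touch disjoint parts of w, and child tasks are only
// spawned after the parent's updates, so no taskwait is needed.
void bisectWeightAllocation(const arma::mat& cov, arma::vec& w,
                            arma::uword lo, arma::uword hi) {
  if (hi - lo <= 1) return;               // a single security: stop splitting
  const arma::uword mid = lo + (hi - lo) / 2;

  // variance of a contiguous set under inverse-variance allocation: w' V w
  auto clusterVar = [&](arma::uword a, arma::uword b) {
    const arma::mat V = cov.submat(a, a, b - 1, b - 1);
    arma::vec iv = 1.0 / arma::vec(V.diag());   // inverse variances
    iv /= arma::accu(iv);                       // normalise to sum to one
    return arma::as_scalar(iv.t() * V * iv);
  };

  const double v1 = clusterVar(lo, mid);
  const double v2 = clusterVar(mid, hi);
  const double alpha = 1.0 - v1 / (v1 + v2);

  w.subvec(lo, mid - 1) *= alpha;          // set 1 scaled by alpha
  w.subvec(mid, hi - 1) *= 1.0 - alpha;    // set 2 scaled by 1 - alpha

  #pragma omp task shared(cov, w)
  bisectWeightAllocation(cov, w, lo, mid);
  #pragma omp task shared(cov, w)
  bisectWeightAllocation(cov, w, mid, hi);
}

// Driver: start from unit weights; the implicit barrier at the end of the
// parallel region guarantees all bisection tasks finish before returning.
// [[Rcpp::export]]
arma::vec weightAllocation(const arma::mat& quasiDiagCov) {
  arma::vec w(quasiDiagCov.n_rows, arma::fill::ones);
  #pragma omp parallel
  #pragma omp single
  bisectWeightAllocation(quasiDiagCov, w, 0, quasiDiagCov.n_rows);
  return w;
}
```

Because each recursion level scales disjoint slices of the weight vector, the tasks never race on the same elements, which matches the observation above that no taskwait is needed.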
With Rcpp and OpenMP, the computation is fast enough to be competitive when used in backtesting. The benchmark below uses a return matrix of 30 securities with 2500 data points.
replications | elapsed (seconds) |
---|---|
1000 | 8.294 |
System and R-Packages Information
```
R version 3.3.0 (2016-05-03)
Platform: i686-pc-linux-gnu (32-bit)
Running under: Ubuntu 16.04 LTS

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  base

other attached packages:
[1] data.table_1.9.6 rbenchmark_1.0.0

loaded via a namespace (and not attached):
 [1] magrittr_1.5              formatR_1.4
 [3] tools_3.3.0               RcppArmadillo_0.6.700.6.0
 [5] Rcpp_0.12.5               stringi_1.0-1
 [7] highr_0.6                 knitr_1.13
 [9] methods_3.3.0             stringr_1.0.0
[11] chron_2.3-47              evaluate_0.9
```