Unsupervised machine learning methods such as hierarchical clustering allow us to discover trends and patterns of similarity within data. Here, I demonstrate, using test data, how to apply hierarchical clustering to the columns of a data matrix. Since my main focus is bioinformatics applications, I assume that the columns of the matrix represent individual samples and the rows represent genes, transcripts, or some other biological feature. However, as the application of clustering algorithms is not restricted to biology, the rows and columns of the matrix may represent other things depending on the field of research!
For the distance metric, I will use the Spearman correlation-based distance supported by the Dist function of the amap package. For skewed data, it is a good idea to compare the order of the values rather than their linear relationship (i.e. Pearson correlation) or how geometrically close the values are (i.e. Euclidean distance). For more information, see the example in one of my previous posts on how Spearman correlation can discover associations more effectively in skewed data.
# Simulate a 50 x 20 matrix of normal values; columns are samples
set.seed(1) # for reproducibility
values <- matrix(rnorm(1000), ncol = 20)
colnames(values) <- paste("col", 1:20, sep = "")
# Cluster the columns using a Spearman correlation-based distance
library(amap)
hRes <- hclust(Dist(t(values), method = "spearman"))
plot(hRes)
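As a side note, a similar clustering can be obtained with base R alone. The sketch below is my own addition: it uses 1 minus the Spearman correlation as the distance. Per my reading of the amap documentation, its spearman option is a rank-based distance that is a linear function of this quantity, so the tree shape should match under the default complete linkage.
# Base-R sketch: 1 - Spearman correlation as a distance (assumption, see above)
spearmanDist <- as.dist(1 - cor(values, method = "spearman"))
plot(hclust(spearmanDist))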
After running hierarchical clustering, we can cut the resulting binary tree at a certain depth or request that it be cut in a manner that yields a certain number of clusters. Here, I request that the tree be cut in a way that results in 2 sample clusters. Furthermore, I convert the tree to a "dendrogram" object and colour the branches and labels to visualize the 2 clusters. One can use the color_branches and color_labels functions to cut and colour the tree.
library(dendextend)
# Cut the tree into 2 clusters and colour branches and labels accordingly
hResDen <- as.dendrogram(hRes)
hResCut <- cutree(hResDen, 2)
hResDen <- color_branches(hResDen, k = 2)
hResDen <- color_labels(hResDen, k = 2)
plot(hResDen)
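It can also be useful to check which samples fall into which cluster. This short check is my own addition; it assumes cutree returns a vector of cluster IDs named by the sample labels, which is the case here.
# Tabulate cluster sizes and list the members of each cluster
table(hResCut)
split(names(hResCut), hResCut)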
Alternatively, one can use the color_branches and color_labels functions to manually define the colours of the branches and labels of the tree.
# Manual colouring based on the cut results
colours <- c(2, 3)
hResDen <- as.dendrogram(hRes)
# Order of the samples (leaves) as they appear in the tree, left to right
colOrder <- hRes$order
hResDen <- color_branches(hResDen, clusters = hResCut[colOrder], col = colours)
# Map each cluster ID (in leaf order) to a colour, then colour the labels
labelCol <- colours
names(labelCol) <- unique(hResCut[colOrder])
hResDen <- color_labels(hResDen, col = labelCol[as.character(hResCut[colOrder])])
plot(hResDen)
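To verify the result, dendextend's labels_colors accessor returns the colour currently attached to each label; this quick sanity check is my own addition.
# Sanity check: the colour assigned to each label, in leaf order
labels_colors(hResDen)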
But what if we want to colour the branches and labels of the tree based on a predefined grouping of the samples? Here, we colour the labels and the edges leading to them to visualize the positions of the "class1", "class2" and "class3" samples in the tree.
# Manual colouring based on some predefined classes
sampleClass <- c(rep("class1", 5), rep("class2", 6), rep("class3", 9))
colours <- c("lightblue", "green", "red")
hResDen <- as.dendrogram(hRes)
colOrder <- hRes$order
# Convert the class labels (reordered to match the leaves) to cluster IDs
hResDen <- color_branches(hResDen, clusters = as.numeric(as.factor(sampleClass[colOrder])), col = colours)
# Map each class name (in leaf order) to a colour, then colour the labels
labelCol <- colours
names(labelCol) <- unique(sampleClass[colOrder])
hResDen <- color_labels(hResDen, col = labelCol[as.character(sampleClass[colOrder])])
plot(hResDen)
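Since the class colours already live in a named vector, adding a legend makes the plot easier to read. These two lines are my own addition and use base graphics only.
# Add a legend mapping each class name to its colour
legend("topright", legend = names(labelCol), fill = labelCol)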