The K-means algorithm is a method to automatically cluster similar data examples together.
The intuition behind K-means is that it is an iterative procedure: it starts by guessing initial centroids, then refines this guess by repeatedly assigning examples to their closest centroids and recomputing the centroids from those assignments.
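In more formal terms (my notation, not from the original post), each iteration alternates an assignment step and an update step:

$$c^{(i)} := \arg\min_j \lVert x^{(i)} - \mu_j \rVert^2$$
$$\mu_j := \frac{1}{\lvert C_j \rvert} \sum_{i \in C_j} x^{(i)}$$

where $\mu_j$ is the $j$-th centroid and $C_j$ is the set of examples currently assigned to it. The first step assigns each example to its closest centroid; the second moves each centroid to the mean of its assigned examples.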
This algorithm was implemented as follows:
library(ggplot2)

kMeansInitCentroids <- function(X, K) {
    ## pick K random examples as the initial centroids
    rand.idx <- sample(1:nrow(X), K)
    centroids <- X[rand.idx, ]
    return(centroids)
}

findClosestCentroids <- function(X, centroids) {
    ## assign each example to the centroid with the smallest squared distance
    K <- nrow(centroids)
    idx <- sapply(1:nrow(X), function(i) {
        which.min(sapply(1:K, function(j) {
            sum((X[i, ] - centroids[j, ])^2)
        }))
    })
    return(idx)
}

computeCentroids <- function(X, idx, K) {
    ## move each centroid to the mean of the examples assigned to it
    centroids <- sapply(1:K, function(i) colMeans(X[idx == i, ]))
    centroids <- t(centroids)
    return(centroids)
}

runkMeans <- function(X, K, max.iter = 10, plot = FALSE, plot.progress = FALSE) {
    initCentroids <- kMeansInitCentroids(X, K)
    K <- nrow(initCentroids)
    centroids <- initCentroids
    preCentroids <- centroids
    for (i in 1:max.iter) {
        idx <- findClosestCentroids(X, centroids)
        centroids <- computeCentroids(X, idx, K)
        if (plot.progress) {
            ## keep the centroid positions of every iteration for plotting
            preCentroids <- rbind(preCentroids, centroids)
        }
    }
    xx <- data.frame(X, cluster = as.factor(idx))
    if (plot) {
        p <- ggplot(xx, aes(V1, V2)) + geom_point(aes(color = cluster))
        if (plot.progress) {
            preCentroids <- data.frame(preCentroids, idx = rep(1:K, max.iter + 1))
            p <- p + geom_point(data = preCentroids, aes(x = V1, y = V2)) +
                geom_path(data = preCentroids, aes(x = V1, y = V2, group = idx))
        }
        print(p)
    }
    return(xx)
}
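Before turning to the course data, here is a quick sketch of how these functions can be called on simulated data (the simulated clusters are my own example, not from the original post); the column names V1 and V2 are assumed because runkMeans hard-codes them in the plotting aesthetics:

## simulate three well-separated 2-D clusters (hypothetical example data)
set.seed(123)
X.sim <- data.frame(V1 = c(rnorm(50, 2), rnorm(50, 6), rnorm(50, 10)),
                    V2 = c(rnorm(50, 2), rnorm(50, 6), rnorm(50, 2)))
xx.sim <- runkMeans(X.sim, K = 3, max.iter = 10, plot = TRUE, plot.progress = TRUE)
head(xx.sim)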
After implementing the algorithm, I applied it to the dataset provided in ML-Class Exercise 7.
## dataset was converted from ML-class exercise 7 ex7data2.mat
X <- read.delim("d:/ex7data2.txt", header = FALSE)
K <- 3
xx <- runkMeans(X, K, plot = TRUE, plot.progress = TRUE)
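As a quick sanity check (my addition, not part of the original post), the resulting partition can be compared with R's built-in kmeans(); the two should agree up to a permutation of the cluster labels:

## cross-tabulate our assignments against stats::kmeans (labels may be permuted)
fit <- kmeans(X, centers = K, nstart = 20)
table(ours = xx$cluster, builtin = fit$cluster)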
The K-means code produces a visualization that traces the progress of the algorithm: the points are colored by their final cluster, and the path of each centroid across iterations is drawn on top.