Area Plots with Intensity Coloring
I am not sure apeescape’s ggplot2 area plot with intensity colouring is really the best way of presenting the information, but it had me intrigued enough to replicate it using base R graphics.
The key technique is to draw a gradient line, which R does not support natively, so we have to roll our own code for that. Unfortunately, `lines(..., type="l")` does not recycle the colour (`col=`) argument, so we end up with rather more loops than I thought would be necessary. (The answer is not to use `lines(..., type="h")`, which, confusingly, does recycle the `col=` argument. This one had me for a while, but `type="h"` lines always start from zero, so you do not get the gradient feature.)
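To see the recycling behaviour concretely, here is a minimal, self-contained sketch (mine, not from the original post) that draws the same colour ramp three ways; note that `segments()` is a vectorised alternative that does recycle `col=`:

```r
## Demo of col= recycling in base graphics (works on any graphics device).
ys   <- seq(0, 1, length.out = 11)   # 11 points -> 10 segments
cols <- heat.colors(10)              # one colour per segment

plot(NULL, xlim = c(0.5, 3.5), ylim = c(0, 1),
     xlab = "", ylab = "", main = "col= recycling")

## type="l": the whole polyline is drawn in cols[1] only -- no recycling.
lines(rep(1, 11), ys, type = "l", col = cols, lwd = 10)

## segments() recycles col=, so each piece gets its own colour:
## a vectorised way to draw a single gradient line.
segments(rep(2, 10), ys[-11], rep(2, 10), ys[-1], col = cols, lwd = 10)

## type="h" recycles col= too, but each line is drawn from zero up to y,
## so you get overplotted bars rather than a gradient.
lines(seq(2.9, 3.1, length.out = 10), ys[-1], type = "h", col = cols, lwd = 10)
```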
We also get a nice opportunity to use the under-appreciated `read.fwf` function.
```r
#!/usr/bin/Rscript
## nino.R - another version of http://bit.ly/9P9Gh1
## Copyright © 2010 Allan Engelhardt (http://www.cybaea.net/)

## Get the data from the NOAA server (negative widths skip columns)
nino <- read.fwf("http://www.cpc.noaa.gov/data/indices/wksst.for",
                 widths=c(-1, 9, rep(c(-5, 4, 4), 4)), skip=4,
                 col.names=c("Week",
                             paste(rep(c("Nino12", "Nino3", "Nino34", "Nino4"),
                                       rep(2, 4)),
                                   c("SST", "SSTA"), sep=".")))
## Make the date column something useful
nino$Week <- as.Date(nino$Week, format="%d%b%Y")

## Make colour gradients
ncol <- 50
grad.neg <- hsv(4/6, seq(0, 1, length.out=ncol), 1)  # Blue gradient
grad.pos <- hsv(  0, seq(0, 1, length.out=ncol), 1)  # Red gradient

## Make plot
plot(Nino34.SSTA ~ Week, data=nino, type="n",
     main="Nino34", xlab="Date", ylab="SSTA", axes=FALSE)
## Grey plot background
do.call(function(...) rect(..., col="gray85", border=NA),
        as.list(par("usr")[c(1, 3, 2, 4)]))
y <- nino$Nino34.SSTA   # The values we will plot
x <- nino$Week
axis.Date(1, x=x, tck=1, col="white")
axis(2, tck=1, col="white")
box()

## Map each value to a gradient length: 1..ncol, scaled by |y| / max(|y|)
idx <- integer(NROW(nino))
idx[y >= 0] <- 1 + round( y[y >= 0] * (ncol - 1) / max( y[y >= 0]), 0)
idx[y <  0] <- 1 + round(-y[y <  0] * (ncol - 1) / max(-y[y <  0]), 0)

## Draw a vertical line at x through the points ys, one colour per segment
draw.gradient <- function(x, ys, cols) {
    xs <- rep(x, 2)
    for (i in seq(1, length(ys) - 1))
        plot.xy(list(x=xs, y=c(ys[i], ys[i+1])), type="l", col=cols[i])
}

for (i in 1:length(x)) {
    ys   <- seq(0, y[i], length.out=idx[i] + 1)
    cols <- (if (y[i] >= 0) grad.pos else grad.neg)
    draw.gradient(x[i], ys, cols)
}
```
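As an aside, the inner loop in `draw.gradient` can be replaced by a single `segments()` call, since `segments()` does recycle `col=`. A sketch (the name `draw.gradient2` is mine, not from the original post):

```r
## Vectorised variant of draw.gradient(): one segments() call instead of a
## loop over plot.xy(). draw.gradient2 is a hypothetical name for this sketch.
draw.gradient2 <- function(x, ys, cols) {
    n <- length(ys) - 1                    # number of segments
    segments(rep(x, n), ys[1:n],           # from (x, ys[i]) ...
             rep(x, n), ys[2:(n + 1)],     # ... to (x, ys[i+1])
             col = cols)                   # col= is recycled per segment
}
## Drop-in usage in the main loop: draw.gradient2(x[i], ys, cols)
```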
The result is a decent gradient.
I deliberately omitted the scale legend on the right-hand side, following Allan's First Law of Happy Graphics: Thou shalt not present the same information twice.
For less dense information you should increase the line width; that is left to the reader. (Hint: it is hard to get just right in base graphics, but `lwd <- ceiling(par("pin")[1] / dev.size("in")[1] * dev.size("px")[1] / length(x))` could be a starting point for an approximation. We really need gradient-filled polygons in base R.)
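To unpack that hint: `par("pin")[1]` is the plot-region width in inches, and multiplying by `dev.size("px")[1] / dev.size("in")[1]` converts inches to pixels, so dividing by the number of vertical lines approximates the pixels available per line. A sketch wrapping this up (`approx.lwd` is a hypothetical helper name, and it assumes a pixel-based device such as `png()`):

```r
## Heuristic line width so that n vertical gradient lines roughly tile the
## plot region. Assumes a pixel-based device (png() etc.); a sketch only.
approx.lwd <- function(n) {
    px.per.in <- dev.size("px")[1] / dev.size("in")[1]  # device resolution
    plot.px   <- par("pin")[1] * px.per.in              # plot width in pixels
    ceiling(plot.px / n)
}
## Usage sketch: pass lwd = approx.lwd(length(x)) through draw.gradient()
## to the plot.xy() call.
```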