Simulating Map-Reduce in R for Big Data Analysis Using Flights Data
At datadolph.in, we are constantly crunching through large amounts of data. We design ways to process large datasets on a single node, turning to distributed computing only when single-node processing becomes too time-consuming and inefficient.
We are happy to share with the R community one such map-reduce-like approach we designed in R to process, on a single node, the flights data (available here), which has ~122 million records and occupies 12 GB of space when uncompressed. We relied heavily on Matthew Dowle's data.table package to load and analyze these large datasets.
It took us a few days to stabilize and optimize this approach, and we are proud to share it, along with the source code, with you. The full source code can be downloaded from datadolph.in's git repository.
Here is how we approached the problem. First, before loading the datasets into R, we compressed each of the 22 CSV files with gzip for faster reading: read.csv can read gzipped files directly, and it reads them faster than the uncompressed originals:
# load data.table (used throughout for fast aggregation)
library(data.table)
# list all gzipped CSV files in the flights folder
flights.files <- list.files(path=flights.folder.path, pattern="\\.csv\\.gz$")
# read the i-th yearly file into a data.table; read.csv handles .gz files transparently
flights <- data.table(read.csv(flights.files[i], stringsAsFactors=F))
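The compression itself is a one-time step. A minimal sketch of how it can be scripted from R, assuming the 22 raw .csv files live in flights.folder.path and the gzip command-line tool is on the PATH (the exact commands we used are not part of the published code):
# one-time step (sketch): gzip every raw CSV in the flights folder
# assumes the gzip command-line tool is installed and on the PATH
csv.files <- list.files(path=flights.folder.path, pattern="\\.csv$", full.names=TRUE)
for(f in csv.files) {
  system(paste("gzip", shQuote(f)))  # replaces each file with a .csv.gz version
}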
Next, we mapped the analysis we wanted to run against each of the datasets. This included computing flight-level, airline-level and airport-level aggregates and generating intermediate results. Here is example code to get stats for each airline by year:
getFlightsStatusByAirlines <- function(flights, yr){
  # aggregate stats by airline for one year
  if(verbose) cat("Getting stats for airlines:", "\n")
  airlines.stats <- flights[, list(
      dep_airports=length(unique(origin)),
      flights=length(origin),
      flights_cancelled=sum(cancelled, na.rm=T),
      flights_diverted=sum(diverted, na.rm=T),
      flights_departed_late=length(which(depdelay > 0)),
      flights_arrived_late=length(which(arrdelay > 0)),
      total_dep_delay_in_mins=sum(depdelay[which(depdelay > 0)]),
      avg_dep_delay_in_mins=round(mean(depdelay[which(depdelay > 0)])),
      median_dep_delay_in_mins=round(median(depdelay[which(depdelay > 0)])),
      miles_traveled=sum(distance, na.rm=T)
    ), by=uniquecarrier][, year:=yr]
  # change column order: move year to the front
  setcolorder(airlines.stats, c("year", colnames(airlines.stats)[-ncol(airlines.stats)]))
  # save this intermediate result to disk
  saveData(airlines.stats, paste(flights.folder.path, "stats/5/airlines_stats_", yr, ".csv", sep=""))
  # clear up space
  rm(airlines.stats)
  # continued.. see full code on git
}
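The saveData() helper used above lives in our utility code on git; it is essentially a thin wrapper around write.csv. A minimal sketch of what such a helper might look like (the implementation in the repository may differ):
# sketch of the saveData() helper: write a data.table/data.frame to CSV,
# creating the target folder first if it does not exist (illustrative, not the repo version)
saveData <- function(dt, path){
  dir.create(dirname(path), recursive=TRUE, showWarnings=FALSE)
  write.csv(dt, file=path, row.names=FALSE)
}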
Here is the map function that runs these per-year calculations over each of the yearly files:
#map all calculations
mapFlightStats <- function(){
  for(j in 1:period) {
    # extract the year from the file name, e.g. "1987.csv.gz" -> 1987
    yr <- as.integer(gsub("[^0-9]", "", gsub("(.*)(\\.csv)", "\\1", flights.files[j])))
    flights.data.file <- paste(flights.folder.path, flights.files[j], sep="")
    if(verbose) cat(yr, ": Reading : ", flights.data.file, "\n")
    flights <- data.table(read.csv(flights.data.file, stringsAsFactors=F))
    setkeyv(flights, c("year", "uniquecarrier", "dest", "origin", "month"))
    # call the per-year aggregation functions
    getFlightStatsForYear(flights, yr)
    getFlightsStatusByAirlines(flights, yr)
    getFlightsStatsByAirport(flights, yr)
  }
}
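The map function depends on a few globals (flights.files, flights.folder.path, period and verbose) that are set up elsewhere in the repository. A minimal driver sketch with illustrative values (the folder path and values below are assumptions, not the published configuration):
# hypothetical setup/driver: values are illustrative only
library(data.table)
flights.folder.path <- "./raw-data/flights/"   # assumed location of the .csv.gz files
flights.files <- list.files(path=flights.folder.path, pattern="\\.csv\\.gz$")
period  <- length(flights.files)               # number of yearly files (22 in our case)
verbose <- TRUE
mapFlightStats()                               # writes the intermediate stats files to disk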
As one can see, we generate intermediate results by airline (and by airport and by flight) for each year and store them on disk. The map function takes less than 2 hours to run on a MacBook Pro with a 2.3 GHz dual-core processor and 8 GB of memory, and it generated 132 intermediate datasets of aggregated analysis.
Finally, we call the reduce function to aggregate the intermediate datasets into the final outputs (for flights, airlines and airports):
#reduce all results
reduceFlightStats <- function(){
  n <- 1:6
  folder.path <- paste("./raw-data/flights/stats/", n, "/", sep="")
  print(folder.path)
  for(i in n){
    # gather every intermediate CSV for this category and stack the rows
    filenames <- paste(folder.path[i], list.files(path=folder.path[i], pattern="\\.csv$"), sep="")
    dt <- do.call("rbind", lapply(filenames, read.csv, stringsAsFactors=F))
    print(nrow(dt))
    saveData(dt, paste("./raw-data/flights/stats/", i, ".csv", sep=""))
  }
}
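As a side note, the same reduce step could be written with data.table itself. A minimal alternative sketch using fread and rbindlist (not the version we ran, just an illustration of the same idea):
# alternative reduce (sketch): fread is faster than read.csv and
# rbindlist stacks the pieces without repeated copying
library(data.table)
reduceFlightStatsDT <- function(){
  for(i in 1:6){
    folder <- paste("./raw-data/flights/stats/", i, "/", sep="")
    files  <- list.files(path=folder, pattern="\\.csv$", full.names=TRUE)
    dt <- rbindlist(lapply(files, fread))
    saveData(dt, paste("./raw-data/flights/stats/", i, ".csv", sep=""))
  }
}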
datadolph.in users are using this analysis to produce micro-stories (we call them datagrams) on airport and airline performance and sharing them on various social networks. Here is a sample micro-story:
This is what we are doing at datadolph.in: making data discovery, data visualization and data storytelling as easy as 1-2-3. Happy Coding, Happy Analyzing and Happy Friday!