Efficiency of Importing Large CSV Files in R
[This article was first published on Yet Another Blog in Statistical Computing » S+/R, and kindly contributed to R-bloggers.]
### size of csv file: 689.4MB (7,009,728 rows * 29 columns) ###

# baseline: base R read.csv()
system.time(read.csv('../data/2008.csv', header = TRUE))
#    user  system elapsed
#  88.301   2.416  90.716
# data.table's optimized file reader
library(data.table)
system.time(fread('../data/2008.csv', header = TRUE, sep = ','))
#    user  system elapsed
#   4.740   0.048   4.785
# bigmemory: reads into a big.matrix (numeric data only)
library(bigmemory)
system.time(read.big.matrix('../data/2008.csv', header = TRUE))
#    user  system elapsed
#  59.544   0.764  60.308
# ff: stores the columns on disk as an ffdf object
library(ff)
system.time(read.csv.ffdf(file = '../data/2008.csv', header = TRUE))
#    user  system elapsed
#  60.028   1.280  61.335
# sqldf: loads the file through an intermediate SQLite database
library(sqldf)
system.time(read.csv.sql('../data/2008.csv'))
#    user  system elapsed
#  87.461   3.880  91.447
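To make the comparison easier to scan, the elapsed times reported above can be collected into a small table with speedups relative to base `read.csv` (the numbers are simply copied from the runs shown; they will of course vary by machine):

```r
# Reported elapsed times (seconds) from the runs above
timings <- data.frame(
  method  = c("read.csv", "fread", "read.big.matrix", "read.csv.ffdf", "read.csv.sql"),
  elapsed = c(90.716, 4.785, 60.308, 61.335, 91.447)
)

# Speedup of each method relative to base read.csv
timings$speedup <- round(timings$elapsed[1] / timings$elapsed, 1)

# Sort fastest first
timings[order(timings$elapsed), ]
```

On these numbers, `fread` is roughly 19x faster than `read.csv`, while `bigmemory` and `ff` land around 1.5x.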