
Data Preparation – Part II

[This article was first published on Flavio Barros » r-bloggers, and kindly contributed to R-bloggers].

This time I will talk about how to deal with large text files in chunks with R. To provide some real data to work with, download the Airlines data for 1988; from now on I will work with this file.

To work with this data I will use the iterators package. This package lets you pass over the file line by line, or chunk by chunk, without actually loading the whole file into memory. To get a feel for the idea, try this code:

install.packages('iterators')        # install the package (only needed once)
library(iterators)
con <- bzfile('1988.csv.bz2', 'r')   # open a read connection to the compressed file

OK, now you have a connection to your file. Let's create an iterator:

it <- ireadLines(con, n=1)   # iterate over the connection, one line at a time
nextElem(it)                 # returns the first line
nextElem(it)                 # returns the next line

As you can see, each call returns the next line. So you can work with one line, or a chunk of lines, even on a large file. If you want to read line by line until the end of the file, you can use something like this:

tryCatch(expr=nextElem(it), error=function(e) return(FALSE))

which returns FALSE at the end of the file. This is a very useful trick for data preparation with large text files.
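To make the pattern concrete, here is a minimal sketch of a loop built around that trick. It re-opens the connection, reads until nextElem() signals the end of the file, and simply counts the lines; the counter, the identical() check and the final close(con) are illustrative additions, not part of the original code.

library(iterators)

con <- bzfile('1988.csv.bz2', 'r')
it  <- ireadLines(con, n = 1)        # use a larger n to get chunks of lines instead

n_lines <- 0
repeat {
  line <- tryCatch(expr = nextElem(it), error = function(e) return(FALSE))
  if (identical(line, FALSE)) break  # FALSE means the iterator is exhausted
  # ... process 'line' here: split fields, filter rows, aggregate, etc. ...
  n_lines <- n_lines + 1
}
close(con)
n_lines

The same pattern works chunk by chunk: with ireadLines(con, n = 1000), each nextElem() call returns up to 1000 lines, which you can parse as a block before asking for the next one.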

